Sample records for "predicts basic statistics"

  1. From Research to Practice: Basic Mathematics Skills and Success in Introductory Statistics

    ERIC Educational Resources Information Center

    Lunsford, M. Leigh; Poplin, Phillip

    2011-01-01

    Based on previous research of Johnson and Kuennen (2006), we conducted a study to determine factors that would possibly predict student success in an introductory statistics course. Our results were similar to Johnson and Kuennen in that we found students' basic mathematical skills, as measured on a test created by Johnson and Kuennen, were a…

  2. The development and validation of the AMPREDICT model for predicting mobility outcome after dysvascular lower extremity amputation.

    PubMed

    Czerniecki, Joseph M; Turner, Aaron P; Williams, Rhonda M; Thompson, Mary Lou; Landry, Greg; Hakimi, Kevin; Speckman, Rebecca; Norvell, Daniel C

    2017-01-01

    The objective of this study was the development of AMPREDICT-Mobility, a tool to predict the probability of independence in either basic or advanced (iBASIC or iADVANCED) mobility 1 year after dysvascular major lower extremity amputation. Two prospective cohort studies during consecutive 4-year periods (2005-2009 and 2010-2014) were conducted at seven medical centers. Multiple demographic and biopsychosocial predictors were collected in the periamputation period among individuals undergoing their first major amputation because of complications of peripheral arterial disease or diabetes. The primary outcomes were iBASIC and iADVANCED mobility, as measured by the Locomotor Capabilities Index. Combined data from both studies were used for model development and internal validation. Backwards stepwise logistic regression was used to develop the final prediction models. The discrimination and calibration of each model were assessed. Internal validity of each model was assessed with bootstrap sampling. Twelve-month follow-up was reached by 157 of 200 (79%) participants. Among these, 54 (34%) did not achieve iBASIC mobility, 103 (66%) achieved at least iBASIC mobility, and 51 (32%) also achieved iADVANCED mobility. Predictive factors associated with reduced odds of achieving iBASIC mobility were increasing age, chronic obstructive pulmonary disease, dialysis, diabetes, prior history of treatment for depression or anxiety, and very poor to fair self-rated health. Those who were white, were married, and had at least a high-school degree had a higher probability of achieving iBASIC mobility. The odds of achieving iBASIC mobility increased with increasing body mass index up to 30 kg/m² and decreased with increasing body mass index thereafter. The prediction model of iADVANCED mobility included the same predictors with the exception of diabetes, chronic obstructive pulmonary disease, and education level. Both models showed strong discrimination with C statistics of 0.85 and 0.82, respectively. The mean difference in predicted probabilities for those who did and did not achieve iBASIC and iADVANCED mobility was 33% and 29%, respectively. Tests for calibration and observed vs predicted plots suggested good fit for both models; however, the precision of the estimates of the predicted probabilities was modest. Internal validation through bootstrapping demonstrated some overoptimism of the original model development, with the optimism-adjusted C statistic for iBASIC and iADVANCED mobility being 0.74 and 0.71, respectively, and the discrimination slope 19% and 16%, respectively. AMPREDICT-Mobility is a user-friendly prediction tool that can inform the patient undergoing a dysvascular amputation and the patient's provider about the probability of independence in either basic or advanced mobility at each major lower extremity amputation level. Copyright © 2016 Society for Vascular Surgery. All rights reserved.
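
    The bootstrap internal validation described above, which yields an optimism-adjusted C statistic, can be sketched generically as follows. This is a minimal illustration on synthetic data with hypothetical predictors, not the AMPREDICT code or variables.

    ```python
    # Hedged sketch of bootstrap optimism correction for a C statistic (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 4))                                   # hypothetical predictors
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    apparent_c = roc_auc_score(y, model.predict_proba(X)[:, 1])   # apparent C statistic

    optimism = []
    for _ in range(200):                                          # bootstrap replicates
        idx = rng.integers(0, n, n)
        boot = LogisticRegression().fit(X[idx], y[idx])
        c_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
        c_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
        optimism.append(c_boot - c_orig)                          # over-optimism per replicate

    print("optimism-adjusted C:", apparent_c - np.mean(optimism))
    ```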

  3. Predicting Success in Psychological Statistics Courses.

    PubMed

    Lester, David

    2016-06-01

    Many students perform poorly in courses on psychological statistics, and it is useful to be able to predict which students will have difficulties. In a study of 93 undergraduates enrolled in Statistical Methods (18 men, 75 women; M age = 22.0 years, SD = 5.1), performance was significantly associated with sex (female students performed better) and proficiency in algebra in a linear regression analysis. Anxiety about statistics was not associated with course performance, indicating that basic mathematical skills are the best correlate of performance in statistics courses and could be used to stream students into classes by ability. © The Author(s) 2016.

  4. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite for understanding the function of, and the learning process in, a biological neural network whose operation depends on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  5. A Simple Statistical Thermodynamics Experiment

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2010-01-01

    Comparing the predicted and actual rolls of combinations of both two and three dice can help to introduce many of the basic concepts of statistical thermodynamics, including multiplicity, probability, microstates, and macrostates, and demonstrate that entropy is indeed a measure of randomness, that disordered states (those of higher entropy) are…
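
    A minimal sketch of the kind of calculation the activity involves: enumerating the microstates of two and three dice, counting the multiplicity of each total (macrostate), and treating S = ln Ω (with the Boltzmann constant set to 1) as the entropy. This is an illustrative reconstruction, not the author's materials.

    ```python
    # Illustrative sketch: multiplicity and entropy of dice totals (k_B set to 1).
    from itertools import product
    from collections import Counter
    from math import log

    def multiplicities(n_dice):
        # macrostate (total) -> multiplicity (number of microstates giving that total)
        return Counter(sum(roll) for roll in product(range(1, 7), repeat=n_dice))

    for n in (2, 3):
        omega = multiplicities(n)
        for total, count in sorted(omega.items()):
            p = count / 6 ** n                       # probability of the macrostate
            s = log(count)                           # "entropy" S = ln(multiplicity)
            print(f"{n} dice, total {total}: multiplicity {count}, p {p:.4f}, S {s:.3f}")
    ```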

  6. Health Literacy Impact on National Healthcare Utilization and Expenditure.

    PubMed

    Rasu, Rafia S; Bawa, Walter Agbor; Suminski, Richard; Snella, Kathleen; Warady, Bradley

    2015-08-17

    Health literacy presents an enormous challenge in the delivery of effective healthcare and quality outcomes. We evaluated the impact of low health literacy (LHL) on healthcare utilization and healthcare expenditure. Database analysis used the Medical Expenditure Panel Survey (MEPS) from 2005-2008, which provides nationally representative estimates of healthcare utilization and expenditure. Health literacy scores (HLSs) were calculated based on a validated, predictive model and were scored according to the National Assessment of Adult Literacy (NAAL). HLS ranged from 0-500. Health literacy level (HLL) was categorized into 2 groups: below basic or basic (HLS <226) and above basic (HLS ≥226). Healthcare utilization was expressed as physician, nonphysician, or emergency room (ER) visits and healthcare spending. Expenditures were adjusted to 2010 rates using the Consumer Price Index (CPI). A P value of 0.05 or less was the criterion for statistical significance in all analyses. Multivariate regression models assessed the impact of the predicted HLLs on outpatient healthcare utilization and expenditures. All analyses were performed with SAS and STATA® 11.0 statistical software. The study evaluated 22 599 samples representing 503 374 648 weighted individuals nationally from 2005-2008. The cohort had an average age of 49 years and included more females (57%). Caucasians were the predominant racial/ethnic group (83%) and 37% of the cohort were from the South region of the United States of America. The proportion of the cohort with basic or below basic health literacy was 22.4%. Annual predicted values of physician visits, nonphysician visits, and ER visits were 6.6, 4.8, and 0.2, respectively, for basic or below basic compared to 4.4, 2.6, and 0.1 for above basic. Predicted values of office and ER visit expenditures were $1284 and $151, respectively, for basic or below basic and $719 and $100 for above basic (P < .05). The extrapolated national estimates show that the annual costs for prescriptions alone for adults with LHL, possibly associated with basic and below basic health literacy, could potentially reach about $172 billion. Health literacy is inversely associated with healthcare utilization and expenditure. Individuals with below basic or basic HLL have greater healthcare utilization and expenditures, spending more on prescriptions compared to individuals with above basic HLL. Public health strategies promoting appropriate education among individuals with LHL may help to improve health outcomes and reduce unnecessary healthcare visits and costs. © 2015 by Kerman University of Medical Sciences.

  7. Investigation of energy management strategies for photovoltaic systems - A predictive control algorithm

    NASA Technical Reports Server (NTRS)

    Cull, R. C.; Eltimsahy, A. H.

    1983-01-01

    The present investigation is concerned with the formulation of energy management strategies for stand-alone photovoltaic (PV) systems, taking into account a basic control algorithm for a possible predictive (and adaptive) controller. The control system controls the flow of energy in the system according to the amount of energy available, and predicts the appropriate control set-points based on the energy (insolation) available by using an appropriate system model. Aspects of adaptation to the conditions of the system are also considered. Attention is given to a statistical analysis technique, the analysis inputs, the analysis procedure, and details regarding the basic control algorithm.

  8. A Quantile Regression Approach to Understanding the Relations among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    ERIC Educational Resources Information Center

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2016-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological…
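
    The multiple quantile regression technique introduced in this record can be sketched with statsmodels; the data and variable names below are synthetic and hypothetical, not the study's measures.

    ```python
    # Hedged sketch of quantile regression at several conditional quantiles (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300
    df = pd.DataFrame({
        "morph": rng.normal(size=n),    # morphological awareness (hypothetical score)
        "vocab": rng.normal(size=n),    # vocabulary knowledge (hypothetical score)
    })
    df["reading"] = 0.4 * df.morph + 0.6 * df.vocab + rng.normal(size=n)

    # Fit the same model at several conditional quantiles of reading comprehension.
    for q in (0.1, 0.25, 0.5, 0.75, 0.9):
        fit = smf.quantreg("reading ~ morph + vocab", df).fit(q=q)
        print(q, round(fit.params["morph"], 3), round(fit.params["vocab"], 3))
    ```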

  9. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory

    ERIC Educational Resources Information Center

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2018-01-01

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different…

  10. Basic Mathematics Test Predicts Statistics Achievement and Overall First Year Academic Success

    ERIC Educational Resources Information Center

    Fonteyne, Lot; De Fruyt, Filip; Dewulf, Nele; Duyck, Wouter; Erauw, Kris; Goeminne, Katy; Lammertyn, Jan; Marchant, Thierry; Moerkerke, Beatrijs; Oosterlinck, Tom; Rosseel, Yves

    2015-01-01

    In the psychology and educational science programs at Ghent University, only 36.1% of the new incoming students in 2011 and 2012 passed all exams. Despite availability of information, many students underestimate the scientific character of social science programs. Statistics courses are a major obstacle in this matter. Not all enrolling students…

  11. Comparing early signs and basic symptoms as methods for predicting psychotic relapse in clinical practice.

    PubMed

    Eisner, Emily; Drake, Richard; Lobban, Fiona; Bucci, Sandra; Emsley, Richard; Barrowclough, Christine

    2018-02-01

    Early signs interventions show promise but could be further developed. A recent review suggested that 'basic symptoms' should be added to conventional early signs to improve relapse prediction. This study builds on preliminary evidence that basic symptoms predict relapse and aimed to: 1. examine which phenomena participants report prior to relapse and how they describe them; 2. determine the best way of identifying pre-relapse basic symptoms; 3. assess current practice by comparing self- and casenote-reported pre-relapse experiences. Participants with non-affective psychosis were recruited from UK mental health services. In-depth interviews (n=23), verbal checklists of basic symptoms (n=23) and casenote extracts (n=208) were analysed using directed content analysis and non-parametric statistical tests. Three-quarters of interviewees reported basic symptoms and all reported conventional early signs and 'other' pre-relapse experiences. Interviewees provided rich descriptions of basic symptoms. Verbal checklist interviews asking specifically about basic symptoms identified these experiences more readily than open questions during in-depth interviews. Only 5% of casenotes recorded basic symptoms; interviewees were 16 times more likely to report basic symptoms than their casenotes did. The majority of interviewees self-reported pre-relapse basic symptoms when asked specifically about these experiences but very few casenotes reported these symptoms. Basic symptoms may be potent predictors of relapse that clinicians miss. A self-report measure would aid monitoring of basic symptoms in routine clinical practice and would facilitate a prospective investigation comparing basic symptoms and conventional early signs as predictors of relapse. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. [How reliable is the monitoring for doping?].

    PubMed

    Hüsler, J

    1990-12-01

    The reliability of doping control, i.e., of the chemical analysis of urine samples in the accredited laboratories and of their decisions, is discussed using probabilistic and statistical methods. Basically, we evaluated and estimated the positive predictive value, which is the probability that a urine sample contains prohibited doping substances given a positive test decision. Since there are no statistical data and evidence for some important quantities related to the predictive value, an exact evaluation is not possible; only conservative lower bounds can be given. We found that the predictive value is at least 90% or 95% with respect to the analysis and decision based on the A-sample only, and at least 99% with respect to both A- and B-samples. A more realistic assessment, though without sufficient statistical confidence, suggests that the true predictive value is significantly larger than these lower bounds.
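
    The positive predictive value argument follows directly from Bayes' theorem. A minimal sketch with assumed (hypothetical) sensitivity, specificity, and prevalence values shows how such conservative lower bounds can be computed; the numbers are illustrative, not those of the paper.

    ```python
    # Illustrative Bayes calculation of the positive predictive value (PPV); numbers are hypothetical.
    def ppv(sensitivity, specificity, prevalence):
        true_pos = sensitivity * prevalence                 # P(positive test and doped)
        false_pos = (1 - specificity) * (1 - prevalence)    # P(positive test and clean)
        return true_pos / (true_pos + false_pos)

    # Conservative (low) assumed prevalence and specificity give a lower bound on the PPV.
    print(ppv(sensitivity=0.95, specificity=0.999, prevalence=0.02))    # A-sample decision only
    print(ppv(sensitivity=0.95, specificity=0.99999, prevalence=0.02))  # A- and B-sample confirmed
    ```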

  13. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  14. CORSSA: Community Online Resource for Statistical Seismicity Analysis

    NASA Astrophysics Data System (ADS)

    Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.

    2011-12-01

    Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology-especially to those aspects with great impact on public policy-statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles with an additional six in draft form along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.

  15. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  16. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  17. The Introductory Sociology Survey

    ERIC Educational Resources Information Center

    Best, Joel

    1977-01-01

    The Introductory Sociology Survey (ISS) is designed to teach introductory students basic skills in developing causal arguments and in using a computerized statistical package to analyze survey data. Students are given codebooks for survey data and asked to write a brief paper predicting the relationship between at least two variables. (Author)

  18. Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)

    NASA Astrophysics Data System (ADS)

    Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee

    2010-12-01

    Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review available statistical seismology software packages.

  19. Estimation and Compression over Large Alphabets

    ERIC Educational Resources Information Center

    Acharya, Jayadev

    2014-01-01

    Compression, estimation, and prediction are basic problems in Information theory, statistics and machine learning. These problems have been extensively studied in all these fields, though the primary focus in a large portion of the work has been on understanding and solving the problems in the asymptotic regime, "i.e." the alphabet size…

  20. Prediction model to estimate presence of coronary artery disease: retrospective pooled analysis of existing cohorts

    PubMed Central

    Genders, Tessa S S; Steyerberg, Ewout W; Nieman, Koen; Galema, Tjebbe W; Mollet, Nico R; de Feyter, Pim J; Krestin, Gabriel P; Alkadhi, Hatem; Leschka, Sebastian; Desbiolles, Lotus; Meijs, Matthijs F L; Cramer, Maarten J; Knuuti, Juhani; Kajander, Sami; Bogaert, Jan; Goetschalckx, Kaatje; Cademartiri, Filippo; Maffei, Erica; Martini, Chiara; Seitun, Sara; Aldrovandi, Annachiara; Wildermuth, Simon; Stinn, Björn; Fornaro, Jürgen; Feuchtner, Gudrun; De Zordo, Tobias; Auer, Thomas; Plank, Fabian; Friedrich, Guy; Pugliese, Francesca; Petersen, Steffen E; Davies, L Ceri; Schoepf, U Joseph; Rowe, Garrett W; van Mieghem, Carlos A G; van Driessche, Luc; Sinitsyn, Valentin; Gopalan, Deepa; Nikolaou, Konstantin; Bamberg, Fabian; Cury, Ricardo C; Battle, Juan; Maurovich-Horvat, Pál; Bartykowszki, Andrea; Merkely, Bela; Becker, Dávid; Hadamitzky, Martin; Hausleiter, Jörg; Dewey, Marc; Zimmermann, Elke; Laule, Michael

    2012-01-01

    Objectives To develop prediction models that better estimate the pretest probability of coronary artery disease in low prevalence populations. Design Retrospective pooled analysis of individual patient data. Setting 18 hospitals in Europe and the United States. Participants Patients with stable chest pain without evidence for previous coronary artery disease, if they were referred for computed tomography (CT) based coronary angiography or catheter based coronary angiography (indicated as low and high prevalence settings, respectively). Main outcome measures Obstructive coronary artery disease (≥50% diameter stenosis in at least one vessel found on catheter based coronary angiography). Multiple imputation accounted for missing predictors and outcomes, exploiting strong correlation between the two angiography procedures. Predictive models included a basic model (age, sex, symptoms, and setting), clinical model (basic model factors and diabetes, hypertension, dyslipidaemia, and smoking), and extended model (clinical model factors and use of the CT based coronary calcium score). We assessed discrimination (c statistic), calibration, and continuous net reclassification improvement by cross validation for the four largest low prevalence datasets separately and the smaller remaining low prevalence datasets combined. Results We included 5677 patients (3283 men, 2394 women), of whom 1634 had obstructive coronary artery disease found on catheter based coronary angiography. All potential predictors were significantly associated with the presence of disease in univariable and multivariable analyses. The clinical model improved the prediction, compared with the basic model (cross validated c statistic improvement from 0.77 to 0.79, net reclassification improvement 35%); the coronary calcium score in the extended model was a major predictor (0.79 to 0.88, 102%). Calibration for low prevalence datasets was satisfactory. Conclusions Updated prediction models including age, sex, symptoms, and cardiovascular risk factors allow for accurate estimation of the pretest probability of coronary artery disease in low prevalence populations. Addition of coronary calcium scores to the prediction models improves the estimates. PMID:22692650

  1. Statistical-mechanical predictions and Navier-Stokes dynamics of two-dimensional flows on a bounded domain.

    PubMed

    Brands, H; Maassen, S R; Clercx, H J

    1999-09-01

    In this paper the applicability of a statistical-mechanical theory to freely decaying two-dimensional (2D) turbulence on a bounded domain is investigated. We consider an ensemble of direct numerical simulations in a square box with stress-free boundaries, with a Reynolds number that is of the same order as in experiments on 2D decaying Navier-Stokes turbulence. The results of these simulations are compared with the corresponding statistical equilibria, calculated from different stages of the evolution. It is shown that the statistical equilibria calculated from early times of the Navier-Stokes evolution do not correspond to the dynamical quasistationary states. At best, the global topological structure is correctly predicted from a relatively late time in the Navier-Stokes evolution, when the quasistationary state has almost been reached. This failure of the (basically inviscid) statistical-mechanical theory is related to viscous dissipation and net leakage of vorticity in the Navier-Stokes dynamics at moderate values of the Reynolds number.

  2. Reinventing Biostatistics Education for Basic Scientists

    PubMed Central

    Weissgerber, Tracey L.; Garovic, Vesna D.; Milin-Lazovic, Jelena S.; Winham, Stacey J.; Obradovic, Zoran; Trzeciakowski, Jerome P.; Milic, Natasa M.

    2016-01-01

    Numerous studies demonstrating that statistical errors are common in basic science publications have led to calls to improve statistical training for basic scientists. In this article, we sought to evaluate statistical requirements for PhD training and to identify opportunities for improving biostatistics education in the basic sciences. We provide recommendations for improving statistics training for basic biomedical scientists, including: 1. Encouraging departments to require statistics training, 2. Tailoring coursework to the students’ fields of research, and 3. Developing tools and strategies to promote education and dissemination of statistical knowledge. We also provide a list of statistical considerations that should be addressed in statistics education for basic scientists. PMID:27058055

  3. Case complexity scores in congenital heart surgery: a comparative study of the Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) system.

    PubMed

    Al-Radi, Osman O; Harrell, Frank E; Caldarone, Christopher A; McCrindle, Brian W; Jacobs, Jeffrey P; Williams, M Gail; Van Arsdell, Glen S; Williams, William G

    2007-04-01

    The Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery system were developed by consensus to compare outcomes of congenital cardiac surgery. We compared the predictive value of the 2 systems. Of all index congenital cardiac operations at our institution from 1982 to 2004 (n = 13,675), we were able to assign an Aristotle Basic Complexity score, a Risk Adjustment in Congenital Heart Surgery score, and both scores to 13,138 (96%), 11,533 (84%), and 11,438 (84%) operations, respectively. Models of in-hospital mortality and length of stay were generated for Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery using an identical data set in which both Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery scores were assigned. The likelihood ratio test for nested models and paired concordance statistics were used. After adjustment for year of operation, the odds ratios for Aristotle Basic Complexity score 3 versus 6, 9 versus 6, 12 versus 6, and 15 versus 6 were 0.29, 2.22, 7.62, and 26.54 (P < .0001). Similarly, odds ratios for Risk Adjustment in Congenital Heart Surgery categories 1 versus 2, 3 versus 2, 4 versus 2, and 5/6 versus 2 were 0.23, 1.98, 5.80, and 20.71 (P < .0001). Risk Adjustment in Congenital Heart Surgery added significant predictive value over Aristotle Basic Complexity (likelihood ratio chi2 = 162, P < .0001), whereas Aristotle Basic Complexity contributed much less predictive value over Risk Adjustment in Congenital Heart Surgery (likelihood ratio chi2 = 13.4, P = .009). Neither system fully adjusted for the child's age. The Risk Adjustment in Congenital Heart Surgery scores were more concordant with length of stay compared with Aristotle Basic Complexity scores (P < .0001). The predictive value of Risk Adjustment in Congenital Heart Surgery is higher than that of Aristotle Basic Complexity. The use of Aristotle Basic Complexity or Risk Adjustment in Congenital Heart Surgery as risk stratification and trending tools to monitor outcomes over time and to guide risk-adjusted comparisons may be valuable.
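
    The nested-model likelihood ratio test used to compare the two scoring systems can be illustrated with a short sketch; the data and variable names below are synthetic and hypothetical, not the study's records.

    ```python
    # Hedged sketch of a likelihood ratio test for nested logistic models (synthetic data).
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(2)
    n = 500
    aristotle = rng.integers(3, 16, n)            # hypothetical complexity scores
    rachs = rng.integers(1, 7, n)                 # hypothetical risk categories
    y = (0.3 * rachs + 0.05 * aristotle + rng.logistic(size=n) > 2).astype(int)

    base = sm.Logit(y, sm.add_constant(np.column_stack([aristotle]))).fit(disp=0)
    full = sm.Logit(y, sm.add_constant(np.column_stack([aristotle, rachs]))).fit(disp=0)

    lr = 2 * (full.llf - base.llf)                # likelihood ratio chi-square, 1 df here
    print("LR chi2 =", round(lr, 1), "p =", chi2.sf(lr, df=1))
    ```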

  4. Statistical power for the comparative regression discontinuity design with a nonequivalent comparison group.

    PubMed

    Tang, Yang; Cook, Thomas D; Kisbu-Sakarya, Yasemin

    2018-03-01

    In the "sharp" regression discontinuity design (RD), all units scoring on one side of a designated score on an assignment variable receive treatment, whereas those scoring on the other side become controls. Thus the continuous assignment variable and binary treatment indicator are measured on the same scale. Because each must be in the impact model, the resulting multi-collinearity reduces the efficiency of the RD design. However, untreated comparison data can be added along the assignment variable, and a comparative regression discontinuity design (CRD) is then created. When the untreated data come from a non-equivalent comparison group, we call this CRD-CG. Assuming linear functional forms, we show that power in CRD-CG is (a) greater than in basic RD; (b) less sensitive to the location of the cutoff and the distribution of the assignment variable; and that (c) fewer treated units are needed in the basic RD component within the CRD-CG so that savings can result from having fewer treated cases. The theory we develop is used to make numerical predictions about the efficiency of basic RD and CRD-CG relative to each other and to a randomized control trial. Data from the National Head Start Impact study are used to test these predictions. The obtained estimates are closer to the predicted parameters for CRD-CG than for basic RD and are generally quite close to the parameter predictions, supporting the emerging argument that CRD should be the design of choice in many applications for which basic RD is now used. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Predicting stillbirth in a low resource setting.

    PubMed

    Kayode, Gbenga A; Grobbee, Diederick E; Amoakoh-Coleman, Mary; Adeleke, Ibrahim Taiwo; Ansah, Evelyn; de Groot, Joris A H; Klipstein-Grobusch, Kerstin

    2016-09-20

    Stillbirth is a major contributor to perinatal mortality and it is particularly common in low- and middle-income countries, where annually about three million stillbirths occur in the third trimester. This study aims to develop a prediction model for early detection of pregnancies at high risk of stillbirth. This retrospective cohort study examined 6,573 pregnant women who delivered at Federal Medical Centre Bida, a tertiary level of healthcare in Nigeria from January 2010 to December 2013. Descriptive statistics were performed and missing data imputed. Multivariable logistic regression was applied to examine the associations between selected candidate predictors and stillbirth. Discrimination and calibration were used to assess the model's performance. The prediction model was validated internally and over-optimism was corrected. We developed a prediction model for stillbirth that comprised maternal comorbidity, place of residence, maternal occupation, parity, bleeding in pregnancy, and fetal presentation. As a secondary analysis, we extended the model by including fetal growth rate as a predictor, to examine how beneficial ultrasound parameters would be for the predictive performance of the model. After internal validation, both calibration and discriminative performance of both the basic and extended model were excellent (i.e. C-statistic basic model = 0.80 (95 % CI 0.78-0.83) and extended model = 0.82 (95 % CI 0.80-0.83)). We developed a simple but informative prediction model for early detection of pregnancies with a high risk of stillbirth for early intervention in a low resource setting. Future research should focus on external validation of the performance of this promising model.

  6. Optimal Prediction in the Retina and Natural Motion Statistics

    NASA Astrophysics Data System (ADS)

    Salisbury, Jared M.; Palmer, Stephanie E.

    2016-03-01

    Almost all behaviors involve making predictions. Whether an organism is trying to catch prey, avoid predators, or simply move through a complex environment, the organism uses the data it collects through its senses to guide its actions by extracting from these data information about the future state of the world. A key aspect of the prediction problem is that not all features of the past sensory input have predictive power, and representing all features of the external sensory world is prohibitively costly both due to space and metabolic constraints. This leads to the hypothesis that neural systems are optimized for prediction. Here we describe theoretical and computational efforts to define and quantify the efficient representation of the predictive information by the brain. Another important feature of the prediction problem is that the physics of the world is diverse enough to contain a wide range of possible statistical ensembles, yet not all inputs are probable. Thus, the brain might not be a generalized predictive machine; it might have evolved to specifically solve the prediction problems most common in the natural environment. This paper summarizes recent results on predictive coding and optimal predictive information in the retina and suggests approaches for quantifying prediction in response to natural motion. Basic statistics of natural movies reveal that general patterns of spatiotemporal correlation are present across a wide range of scenes, though individual differences in motion type may be important for optimal processing of motion in a given ecological niche.

  7. An image based method for crop yield prediction using remotely sensed and crop canopy data: the case of Paphos district, western Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Hadjimitsis, D.

    2014-08-01

    The development of remote sensing techniques has provided the opportunity to optimize yields in the agricultural process and, moreover, to predict the forthcoming yield. Yield prediction plays a vital role in agricultural policy and provides useful data to policy makers. In this context, crop and soil parameters, along with the NDVI index, which are valuable sources of information, were analysed statistically to test (a) whether Durum wheat yield can be predicted and (b) what the actual time window is for predicting the yield in the district of Paphos, where Durum wheat is the basic cultivation and supports the rural economy of the area. Fifteen plots cultivated with Durum wheat by the Agricultural Research Institute of Cyprus for research purposes in the area of interest were observed for three years to derive the necessary data. Statistical and remote sensing techniques were then applied to derive and map a model that can predict the yield of Durum wheat in this area. Indeed, the semi-empirical model developed for this purpose, with a very high correlation coefficient (R2 = 0.886), has shown in practice that it can predict yields very well. Student's t test revealed that predicted and observed yield values have no statistically significant difference. The developed model can and will be further elaborated with more parameters and applied to other crops in the near future.

  8. Systems and methods for knowledge discovery in spatial data

    DOEpatents

    Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.

    2005-03-08

    Systems and methods are provided for knowledge discovery in spatial data as well as to systems and methods for optimizing recipes used in spatial environments such as may be found in precision agriculture. A spatial data analysis and modeling module is provided which allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smoothes and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms on the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations as to how to optimize a recipe for a spatial environment such as a fertilizer recipe for an agricultural field.

  9. Estimating Janka hardness from specific gravity for tropical and temperate species

    Treesearch

    Michael C. Wiemann; David W. Green

    2007-01-01

    Using mean values for basic (green) specific gravity and Janka side hardness for individual species obtained from the world literature, regression equations were developed to predict side hardness from specific gravity. Statistical and graphical methods showed that the hardness–specific gravity relationship is the same for tropical and temperate hardwoods, but that the...
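
    Hardness-specific gravity regressions of this kind are commonly fit as a power law on log-transformed values; the sketch below illustrates that general approach on synthetic data, and the fitted coefficients are hypothetical rather than the paper's published equations.

    ```python
    # Hedged sketch: fitting hardness ~ specific gravity as a power law via log-log regression.
    # Data are synthetic; coefficients are hypothetical, not the study's values.
    import numpy as np

    rng = np.random.default_rng(3)
    G = rng.uniform(0.3, 1.0, 60)                          # basic (green) specific gravity
    H = 3500 * G ** 2.3 * np.exp(rng.normal(0, 0.1, 60))   # synthetic Janka side hardness, N

    slope, intercept = np.polyfit(np.log(G), np.log(H), 1)  # log H = intercept + slope * log G
    print("H ~ %.0f * G^%.2f" % (np.exp(intercept), slope))
    ```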

  10. An exploratory investigation of weight estimation techniques for hypersonic flight vehicles

    NASA Technical Reports Server (NTRS)

    Cook, E. L.

    1981-01-01

    The three basic methods of weight prediction (fixed-fraction, statistical correlation, and point stress analysis) and some of the computer programs that have been developed to implement them are discussed. A modified version of the WAATS (Weights Analysis of Advanced Transportation Systems) program is presented, along with input data forms and an example problem.

  11. A basic need theory approach to problematic Internet use and the mediating effect of psychological distress

    PubMed Central

    Wong, Ting Yat; Yuen, Kenneth S. L.; Li, Wang On

    2015-01-01

    The Internet provides an easily accessible way to meet certain needs. Over-reliance on it leads to problematic use, which studies show can be predicted by psychological distress. Self-determination theory proposes that we all have the basic need for autonomy, competency, and relatedness. This has been shown to explain the motivations behind problematic Internet use. This study hypothesizes that individuals who are psychologically disturbed because their basic needs are not being met are more vulnerable to becoming reliant on the Internet when they seek such needs satisfaction from online activities, and tests a model in which basic needs predict problematic Internet use, fully mediated by psychological distress. Problematic Internet use, psychological distress, and basic needs satisfaction were psychometrically measured in a sample of 229 Hong Kong University students and structural equation modeling was used to test the hypothesized model. All indices showed the model has a good fit. Further, statistical testing supported a mediation effect for psychological distress between needs satisfaction and problematic Internet use. The results extend our understanding of the development and prevention of problematic Internet use based on the framework of self-determination theory. Psychological distress could be used as an early predictor, while preventing and treating problematic Internet use should emphasize the fulfillment of unmet needs. PMID:25642201

  12. A basic need theory approach to problematic Internet use and the mediating effect of psychological distress.

    PubMed

    Wong, Ting Yat; Yuen, Kenneth S L; Li, Wang On

    2014-01-01

    The Internet provides an easily accessible way to meet certain needs. Over-reliance on it leads to problematic use, which studies show can be predicted by psychological distress. Self-determination theory proposes that we all have the basic need for autonomy, competency, and relatedness. This has been shown to explain the motivations behind problematic Internet use. This study hypothesizes that individuals who are psychologically disturbed because their basic needs are not being met are more vulnerable to becoming reliant on the Internet when they seek such needs satisfaction from online activities, and tests a model in which basic needs predict problematic Internet use, fully mediated by psychological distress. Problematic Internet use, psychological distress, and basic needs satisfaction were psychometrically measured in a sample of 229 Hong Kong University students and structural equation modeling was used to test the hypothesized model. All indices showed the model has a good fit. Further, statistical testing supported a mediation effect for psychological distress between needs satisfaction and problematic Internet use. The results extend our understanding of the development and prevention of problematic Internet use based on the framework of self-determination theory. Psychological distress could be used as an early predictor, while preventing and treating problematic Internet use should emphasize the fulfillment of unmet needs.

  13. Bayesian truthing and experimental validation in homeland security and defense

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Wang, Wenjian; Kostrzewski, Andrew; Pradhan, Ranjit

    2014-05-01

    In this paper we discuss relations between Bayesian Truthing (experimental validation), Bayesian statistics, and Binary Sensing in the context of selected Homeland Security and Intelligence, Surveillance, Reconnaissance (ISR) optical and nonoptical application scenarios. The basic Figure of Merit (FoM) is Positive Predictive Value (PPV), as well as false positives and false negatives. By using these simple binary statistics, we can analyze, classify, and evaluate a broad variety of events including: ISR; natural disasters; QC; and terrorism-related, GIS-related, law enforcement-related, and other C3I events.

  14. Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions

    PubMed Central

    Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu

    2014-01-01

    Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine, to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and in personalized regenerative medicine. Using time-lapsed imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics. PMID:24769917

  15. Evidence-based pathology in its second decade: toward probabilistic cognitive computing.

    PubMed

    Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R

    2017-03-01

    Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Comparison of Basic Science Knowledge Between DO and MD Students.

    PubMed

    Davis, Glenn E; Gayer, Gregory G

    2017-02-01

    With the coming single accreditation system for graduate medical education, medical educators may wonder whether knowledge in basic sciences is equivalent for osteopathic and allopathic medical students. To examine whether medical students' basic science knowledge is the same among osteopathic and allopathic medical students. A dataset of the Touro University College of Osteopathic Medicine-CA student records from the classes of 2013, 2014, and 2015 and the national cohort of National Board of Medical Examiners Comprehensive Basic Science Examination (NBME-CBSE) parameters for MD students were used. Models of the Comprehensive Osteopathic Medical Licensing Examination-USA (COMLEX-USA) Level 1 scores were fit using linear and logistic regression. The models included variables used in both osteopathic and allopathic medical professions to predict COMLEX-USA outcomes, such as Medical College Admission Test biology scores, preclinical grade point average, number of undergraduate science units, and scores on the NBME-CBSE. Regression statistics were studied to compare the effectiveness of models that included or excluded NBME-CBSE scores at predicting COMLEX-USA Level 1 scores. Variance inflation factor was used to investigate multicollinearity. Receiver operating characteristic curves were used to show the effectiveness of NBME-CBSE scores at predicting COMLEX-USA Level 1 pass/fail outcomes. A t test at 99% level was used to compare mean NBME-CBSE scores with the national cohort. A total of 390 student records were analyzed. Scores on the NBME-CBSE were found to be an effective predictor of COMLEX-USA Level 1 scores (P<.001). The pass/fail outcome on COMLEX-USA Level 1 was also well predicted by NBME-CBSE scores (P<.001). No significant difference was found in performance on the NBME-CBSE between osteopathic and allopathic medical students (P=.322). As an examination constructed to assess the basic science knowledge of allopathic medical students, the NBME-CBSE is effective at predicting performance on COMLEX-USA Level 1. In addition, osteopathic medical students performed the same as allopathic medical students on the NBME-CBSE. The results imply that the same basic science knowledge is expected for DO and MD students.

  17. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, and for it a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or used in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to try to formulate a better modelling method that ensures model quality. The results are as follows. Statistics provides a method for calculating the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
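
    The "confidence interval of the predicted value" proposed here as a quality measure can be computed directly for an ordinary least squares fit; a minimal sketch with synthetic data and hypothetical variable names:

    ```python
    # Hedged sketch of confidence/prediction intervals for a linear trip-production model
    # (synthetic data, hypothetical variable names).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    households = rng.uniform(50, 500, 40)              # hypothetical zone sizes
    trips = 1.8 * households + rng.normal(0, 60, 40)   # hypothetical trip productions

    X = sm.add_constant(households)
    fit = sm.OLS(trips, X).fit()

    new = sm.add_constant(np.array([100.0, 300.0]), has_constant="add")
    pred = fit.get_prediction(new).summary_frame(alpha=0.05)   # 95% intervals
    print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])      # interval of the predicted value
    ```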

  18. Have Basic Mathematical Skills Grown Obsolete in the Computer Age: Assessing Basic Mathematical Skills and Forecasting Performance in a Business Statistics Course

    ERIC Educational Resources Information Center

    Noser, Thomas C.; Tanner, John R.; Shah, Situl

    2008-01-01

    The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…

  19. What's in a Teacher Test? Assessing the Relationship between Teacher Test Scores and Student Secondary STEM Achievement. CEDR Working Paper. WP #2016-4

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Gratz, Trevor; Theobald, Roddy

    2016-01-01

    We investigate the predictive validity of teacher credential test scores for student performance in secondary STEM classrooms in Washington state. After replicating earlier findings that teacher basic skills licensure test scores are a modest and statistically significant predictor of student math test score gains in elementary grades, we focus on…

  20. Antibiotics in Animal Products

    NASA Astrophysics Data System (ADS)

    Falcão, Amílcar C.

    The administration of antibiotics to animals to prevent or treat diseases led us to be concerned about the impact of these antibiotics on human health. In fact, animal products could be a potential vehicle to transfer drugs to humans. Using appropriate mathematical and statistical models, one can predict the kinetic profile of drugs and their metabolites and, consequently, develop preventive procedures regarding drug transmission (i.e., determination of appropriate withdrawal periods). Nevertheless, in the present chapter the mathematical and statistical concepts for data interpretation are given strictly to allow understanding of some basic pharmacokinetic principles and to illustrate the determination of withdrawal periods.
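
    As a purely illustrative example of the kinetic reasoning behind a withdrawal period, a first-order elimination sketch is given below; all numbers (initial residue, half-life, residue limit) are hypothetical and not taken from the chapter.

    ```python
    # Hedged sketch: first-order elimination and a withdrawal-period estimate (hypothetical numbers).
    from math import log

    c0 = 500.0          # initial tissue residue, µg/kg (hypothetical)
    half_life = 36.0    # elimination half-life, hours (hypothetical)
    mrl = 10.0          # maximum residue limit, µg/kg (hypothetical)

    k = log(2) / half_life                 # first-order elimination rate constant
    t_withdraw = log(c0 / mrl) / k         # time for C(t) = c0 * exp(-k t) to fall to the MRL
    print("withdrawal period ~ %.0f hours (%.1f days)" % (t_withdraw, t_withdraw / 24))
    ```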

  1. A preliminary analysis of library holdings as compared to the basic resources for pharmacy education list.

    PubMed

    Vaughan, K T L V; Lerner, Rachel C

    2013-01-01

    The catalogs of 11 university libraries were analyzed against the Basic Resources for Pharmaceutical Education (BRPE) to measure the percent coverage of the core total list as well as the core sublist. There is no clear trend in this data to link school age, size, or rank with percentage of coverage of the total list or the "First Purchase" core list when treated as independent variables. Approximately half of the schools have significantly higher percentages of core titles than statistically expected. Based on this data, it is difficult to predict what percentage of titles on the BRPE a library will contain.

  2. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
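
    The two verification scores named here, percent correct (PC) and the Hanssen-Kuipers discriminant (HKD), are computed from the 2x2 contingency table obtained after dichotomizing the probabilistic forecast at a chosen threshold. A minimal sketch with hypothetical counts:

    ```python
    # Hedged sketch: PC and HKD from a 2x2 table of forecast vs. observed contrail occurrence
    # (the counts are hypothetical).
    def pc_hkd(hits, false_alarms, misses, correct_negatives):
        total = hits + false_alarms + misses + correct_negatives
        pc = (hits + correct_negatives) / total                          # percent correct
        hit_rate = hits / (hits + misses)                                # probability of detection
        false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
        return pc, hit_rate - false_alarm_rate                           # HKD = POD - POFD

    # Dichotomize at a threshold (e.g. climatological frequency or 0.5), tabulate, then score:
    print(pc_hkd(hits=120, false_alarms=40, misses=30, correct_negatives=810))
    ```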

  3. A Predictive Approach to Network Reverse-Engineering

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2005-03-01

    A central challenge of systems biology is the "reverse engineering" of transcriptional networks: inferring which genes exert regulatory control over which other genes. Attempting such inference at the genomic scale has only recently become feasible, via data-intensive biological innovations such as DNA microarrays ("DNA chips") and the sequencing of whole genomes. In this talk we present a predictive approach to network reverse-engineering, in which we integrate DNA chip data and sequence data to build a model of the transcriptional network of the yeast S. cerevisiae capable of predicting the response of genes in unseen experiments. The technique can also be used to extract "motifs," sequence elements which act as binding sites for regulatory proteins. We validate by a number of approaches and present a comparison of theoretical predictions vs. experimental data, along with biological interpretations of the resulting model. En route, we will illustrate some basic notions in statistical learning theory (fitting vs. over-fitting; cross-validation; assessing statistical significance), highlighting ways in which physicists can make a unique contribution to data-driven approaches to reverse engineering.

  4. Measures of accuracy and performance of diagnostic tests.

    PubMed

    Drobatz, Kenneth J

    2009-05-01

    Diagnostic tests are integral to the practice of veterinary cardiology, any other specialty, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes the diagnostic test properties, including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve. The review draws on practical book chapters and standard statistics manuscripts. Diagnostic measures such as sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve are described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential in reviewing clinical scientific papers and understanding evidence-based medicine.
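
    A minimal sketch of the measures named above, computed from a 2x2 table; the counts are hypothetical, and note that the predictive values (unlike sensitivity and specificity) depend on disease prevalence.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Basic accuracy measures of a diagnostic test from a 2x2 table."""
        sensitivity = tp / (tp + fn)               # true-positive rate
        specificity = tn / (tn + fp)               # true-negative rate
        ppv = tp / (tp + fp)                       # positive predictive value
        npv = tn / (tn + fn)                       # negative predictive value
        lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
        return dict(sensitivity=sensitivity, specificity=specificity,
                    ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

    # Hypothetical counts: 80 diseased animals test positive, 20 test negative,
    # 30 healthy animals test positive, 170 test negative.
    print(diagnostic_metrics(tp=80, fp=30, fn=20, tn=170))
    ```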

  5. Predicting Macroscale Effects Through Nanoscale Features

    DTIC Science & Technology

    2012-01-01

    errors become incorrectly computed by the basic OLS technique. To test for the presence of heteroscedasticity, the Breusch-Pagan/Cook-Weisberg test ...is employed, with the test statistic distributed as χ² with degrees of freedom equal to the number of regressors. The Breusch-Pagan/Cook...between shock sensitivity and Sm does not exhibit any heteroscedasticity. The Breusch-Pagan/Cook-Weisberg test provides χ²(1) = 1.73, which
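
    As a hedged illustration of the test referred to in this fragment, the sketch below implements the Lagrange-multiplier form of the Breusch-Pagan test by hand (regress the squared OLS residuals on the regressors and compare n·R² with a χ² distribution); the data are synthetic and unrelated to the report's shock-sensitivity measurements.

    ```python
    import numpy as np
    from scipy import stats

    def breusch_pagan_lm(y, X):
        """LM form of the Breusch-Pagan test: regress squared OLS residuals on X
        and return n * R^2, which is ~ chi^2 with (number of regressors) d.o.f."""
        X1 = np.column_stack([np.ones(len(y)), X])          # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        u2 = resid ** 2

        gamma, *_ = np.linalg.lstsq(X1, u2, rcond=None)     # auxiliary regression
        fitted = X1 @ gamma
        r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
        lm = len(y) * r2
        df = X.shape[1]
        return lm, stats.chi2.sf(lm, df)

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 200)
    y = 2 + 0.5 * x + rng.standard_normal(200) * (0.5 + 0.3 * x)   # heteroscedastic noise
    lm, p = breusch_pagan_lm(y, x[:, None])
    print(f"LM = {lm:.2f}, p = {p:.4f}")
    ```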

  6. Statistical Extremes of Turbulence and a Cascade Generalisation of Euler's Gyroscope Equation

    NASA Astrophysics Data System (ADS)

    Tchiguirinskaia, Ioulia; Scherzer, Daniel

    2016-04-01

    Turbulence refers to a rather well-defined hydrodynamical phenomenon uncovered by Reynolds. Nowadays, the word turbulence is used to designate the loss of order in many different geophysical fields and the related fundamental extreme variability of environmental data over a wide range of scales. Classical statistical techniques for estimating the extremes, being largely limited to statistical distributions, do not take into account the mechanisms generating such extreme variability. Alternative approaches to nonlinear variability are based on a fundamental property of the non-linear equations: scale invariance, which means that these equations are formally invariant under given scale transforms. Its specific framework is that of multifractals. In this framework extreme variability builds up scale by scale, leading to non-classical statistics. Although multifractals are increasingly understood as a basic framework for handling such variability, there is still a gap between their potential and their actual use. In this presentation we discuss how to deal with highly theoretical problems of mathematical physics together with a wide range of geophysical applications. We use Euler's gyroscope equation as a basic element in constructing a complex deterministic system that preserves not only the scale symmetry of the Navier-Stokes equations, but some more of their symmetries. Euler's equation has not only been the object of many theoretical investigations of the gyroscope device, but has also been generalised enough to become the basic equation of fluid mechanics. It is therefore no surprise that a cascade generalisation of this equation can be used to characterise the intermittency of turbulence, to better understand the links between the multifractal exponents and the structure of a simplified, but not simplistic, version of the Navier-Stokes equations. In a certain sense, this approach is similar to that of Lorenz, who studied how the flap of a butterfly wing could generate a cyclone with the help of a 3D ordinary differential system. Being well supported by the extensive numerical results, the cascade generalisation of Euler's gyroscope equation opens new horizons for predictability and predictions of processes with long-range dependence.

  7. Computational Prediction of Shock Ignition Thresholds and Ignition Probability of Polymer-Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Wei, Yaochi; Kim, Seokpum; Horie, Yasuyuki; Zhou, Min

    2017-06-01

    A computational approach is developed to predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs). The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific damage mechanisms considered include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to mimic relevant experiments for statistical variations of material behavior due to inherent material heterogeneities. The ignition thresholds and corresponding ignition probability maps are predicted for PBX 9404 and PBX 9501 for the impact loading regime of Up = 200-1200 m/s. James and Walker-Wasley relations are utilized to establish explicit analytical expressions for the ignition probability as a function of load intensities. The predicted results are in good agreement with available experimental measurements. The capability to computationally predict the macroscopic response from material microstructures and basic constituent properties lends itself to the design of new materials and the analysis of existing materials. The authors gratefully acknowledge the support from Air Force Office of Scientific Research (AFOSR) and the Defense Threat Reduction Agency (DTRA).

  8. Non-abelian anyons and topological quantum information processing in 1D wire networks

    NASA Astrophysics Data System (ADS)

    Alicea, Jason

    2012-02-01

    Topological quantum computation provides an elegant solution to decoherence, circumventing this infamous problem at the hardware level. The most basic requirement in this approach is the ability to stabilize and manipulate particles exhibiting non-Abelian exchange statistics -- Majorana fermions being the simplest example. Curiously, Majorana fermions have been predicted to arise both in 2D systems, where non-Abelian statistics is well established, and in 1D, where exchange statistics of any type is ill-defined. An important question then arises: do Majorana fermions in 1D hold the same technological promise as their 2D counterparts? In this talk I will answer this question in the affirmative, describing how one can indeed manipulate and harness the non-Abelian statistics of Majoranas in a remarkably simple fashion using networks formed by quantum wires or topological insulator edges.

  9. A Quantile Regression Approach to Understanding the Relations Between Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    PubMed Central

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2015-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in Adult Basic Education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. PMID:25351773
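
    For readers unfamiliar with the technique, a minimal sketch of quantile regression alongside ordinary least squares is shown below using statsmodels; the variables and data are synthetic stand-ins, not the study's ABE measures.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data standing in for ABE students' scores.
    rng = np.random.default_rng(2)
    n = 300
    morph = rng.standard_normal(n)                  # morphological awareness
    vocab = rng.standard_normal(n)                  # vocabulary knowledge
    reading = 0.4 * morph + 0.6 * vocab + 0.5 * rng.standard_normal(n)
    df = pd.DataFrame(dict(reading=reading, morph=morph, vocab=vocab))

    # Conditional-mean model (OLS) vs. several conditional quantiles.
    ols = smf.ols("reading ~ morph + vocab", df).fit()
    print("OLS:", ols.params.round(3).to_dict())

    for q in (0.1, 0.25, 0.5, 0.75, 0.9):
        fit = smf.quantreg("reading ~ morph + vocab", df).fit(q=q)
        print(f"q={q:.2f}:", fit.params.round(3).to_dict())
    ```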

  10. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
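
    One of the misuses discussed, alpha inflation, is easy to demonstrate by simulation: testing many correlations on pure noise at alpha = 0.05 produces at least one "significant" result far more often than 5% of the time. A minimal sketch with made-up dimensions:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_subjects, n_predictors, n_sims = 50, 20, 1000

    family_wise_hits = 0
    for _ in range(n_sims):
        outcome = rng.standard_normal(n_subjects)
        predictors = rng.standard_normal((n_subjects, n_predictors))   # pure noise
        pvals = [stats.pearsonr(predictors[:, j], outcome)[1]
                 for j in range(n_predictors)]
        family_wise_hits += any(p < 0.05 for p in pvals)

    # With 20 uncorrected tests the chance of at least one "significant" correlation
    # is roughly 1 - 0.95**20 (about 64%), even though no true association exists.
    print("family-wise false-positive rate:", family_wise_hits / n_sims)
    ```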

  11. First trimester prediction of maternal glycemic status.

    PubMed

    Gabbay-Benziv, Rinat; Doyle, Lauren E; Blitzer, Miriam; Baschat, Ahmet A

    2015-05-01

    To predict gestational diabetes mellitus (GDM) or normoglycemic status using first trimester maternal characteristics. We used data from a prospective cohort study. First trimester maternal characteristics were compared between women with and without GDM. The association of these variables with sugar values at the glucose challenge test (GCT) and with subsequent GDM was tested to identify key parameters. A predictive algorithm for GDM was developed, and receiver operating characteristic (ROC) statistics were used to derive the optimal risk score. We defined the normoglycemic state as one in which the GCT and, whenever obtained, all four sugar values at the oral glucose tolerance test were normal. Using the same statistical approach, we developed an algorithm to predict the normoglycemic state. Maternal age, race, prior GDM, first trimester BMI, and systolic blood pressure (SBP) were all significantly associated with GDM. Age, BMI, and SBP were also associated with GCT values. The equation constructed by logistic regression analysis and the calculated risk score yielded a sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 62%, 13.8%, and 98.3%, respectively, at a cut-off value of 0.042 (ROC-AUC, area under the curve, 0.819; CI, confidence interval, 0.769-0.868). The model constructed for normoglycemia prediction demonstrated lower performance (ROC-AUC 0.707, CI 0.668-0.746). GDM prediction can be achieved at the first trimester encounter by integrating maternal characteristics and basic measurements, while normoglycemic status prediction is less effective.

  12. Using chick forebrain neurons to model neurodegeneration and protection in an undergraduate neuroscience laboratory course.

    PubMed

    Burdo, Joseph R

    2013-01-01

    Since 2009 at Boston College, we have been offering a Research in Neuroscience course using cultured neurons in an in vitro model of stroke. The students work in groups to learn how to perform sterile animal cell culture and run several basic bioassays to assess cell viability. They are then tasked with analyzing the scientific literature in an attempt to identify and predict the intracellular pathways involved in neuronal death, and identify dietary antioxidant compounds that may provide protection based on their known effects in other cells. After each group constructs a hypothesis pertaining to the potential neuroprotection, we purchase one compound per group and the students test their hypotheses using a commonly performed viability assay. The groups generate quantitative data and perform basic statistics on that data to analyze it for statistical significance. Finally, the groups compile their data and other elements of their research experience into a poster for our departmental research celebration at the end of the spring semester.

  13. Using Chick Forebrain Neurons to Model Neurodegeneration and Protection in an Undergraduate Neuroscience Laboratory Course

    PubMed Central

    Burdo, Joseph R.

    2013-01-01

    Since 2009 at Boston College, we have been offering a Research in Neuroscience course using cultured neurons in an in vitro model of stroke. The students work in groups to learn how to perform sterile animal cell culture and run several basic bioassays to assess cell viability. They are then tasked with analyzing the scientific literature in an attempt to identify and predict the intracellular pathways involved in neuronal death, and identify dietary antioxidant compounds that may provide protection based on their known effects in other cells. After each group constructs a hypothesis pertaining to the potential neuroprotection, we purchase one compound per group and the students test their hypotheses using a commonly performed viability assay. The groups generate quantitative data and perform basic statistics on that data to analyze it for statistical significance. Finally, the groups compile their data and other elements of their research experience into a poster for our departmental research celebration at the end of the spring semester. PMID:23805059

  14. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
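
    The two ingredients of the scheme, sample-to-sample prediction and variable-length coding of the residuals, can be sketched in a few lines. The toy code below uses an Elias-gamma-style code purely for illustration; it is not the Rice-Plaunt concatenated-code Basic Compressor.

    ```python
    def zigzag(d):
        # Map signed residuals 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
        return 2 * d if d >= 0 else -2 * d - 1

    def elias_gamma(n):
        # Elias gamma code for n >= 1: (bit-length - 1) zeros, then n in binary.
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def encode(pixels):
        """Sample-to-sample prediction: code the first pixel verbatim (8 bits),
        then a short variable-length code for each prediction residual."""
        bits = format(pixels[0], "08b")
        prev = pixels[0]
        for p in pixels[1:]:
            bits += elias_gamma(zigzag(p - prev) + 1)   # +1 so a zero residual is codable
            prev = p
        return bits

    # Smooth data (small residuals) compresses well; noisy data does not.
    smooth = [100, 101, 101, 102, 104, 103, 103, 102, 101, 101]
    noisy = [100, 17, 230, 5, 190, 64, 250, 3, 128, 77]
    for name, px in (("smooth", smooth), ("noisy", noisy)):
        print(f"{name}: {len(encode(px))} bits vs {8 * len(px)} bits raw")
    ```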

  15. All individuals are not created equal; accounting for interindividual variation in fitting life-history responses to toxicants.

    PubMed

    Jager, Tjalling

    2013-02-05

    The individuals of a species are not equal. These differences frustrate experimental biologists and ecotoxicologists who wish to study the response of a species (in general) to a treatment. In the analysis of data, differences between model predictions and observations on individual animals are usually treated as random measurement error around the true response. These deviations, however, are mainly caused by real differences between the individuals (e.g., differences in physiology and in initial conditions). Understanding these intraspecies differences, and accounting for them in the data analysis, will improve our understanding of the response to the treatment we are investigating and allow for a more powerful, less biased, statistical analysis. Here, I explore a basic scheme for statistical inference to estimate parameters governing stress that allows individuals to differ in their basic physiology. This scheme is illustrated using a simple toxicokinetic-toxicodynamic model and a data set for growth of the springtail Folsomia candida exposed to cadmium in food. This article should be seen as proof of concept; a first step in bringing more realism into the statistical inference for process-based models in ecotoxicology.

  16. Prognostic Indexes for Brain Metastases: Which Is the Most Powerful?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arruda Viani, Gustavo, E-mail: gusviani@gmail.com; Bernardes da Silva, Lucas Godoi; Stefano, Eduardo Jose

    Purpose: The purpose of the present study was to compare the prognostic indexes (PIs) of patients with brain metastases (BMs) treated with whole brain radiotherapy (WBRT) using an artificial neural network. This analysis is important, because it evaluates the prognostic power of each PI to guide clinical decision-making and outcomes research. Methods and Materials: A retrospective prognostic study was conducted of 412 patients with BMs who underwent WBRT between April 1998 and March 2010. The eligibility criteria for patients included having undergone WBRT or WBRT plus neurosurgery. The data were analyzed using the artificial neural network. The input neural data consisted of all prognostic factors included in the 5 PIs (recursive partitioning analysis, graded prognostic assessment [GPA], basic score for BMs, Rotterdam score, and Germany score). The data set was randomly divided into 300 training and 112 testing examples for survival prediction. All 5 PIs were compared using our database of 412 patients with BMs. The sensitivity of the 5 indexes to predict survival according to their input variables was determined statistically using receiver operating characteristic curves. The importance of each variable from each PI was subsequently evaluated. Results: The overall 1-, 2-, and 3-year survival rates were 22%, 10.2%, and 5.1%, respectively. All classes of PIs were significantly associated with survival (recursive partitioning analysis, P < .0001; GPA, P < .0001; basic score for BMs, P = .002; Rotterdam score, P = .001; and Germany score, P < .0001). Comparing the areas under the curves, the GPA was statistically most sensitive in predicting survival (GPA, 86%; recursive partitioning analysis, 81%; basic score for BMs, 79%; Rotterdam, 73%; and Germany score, 77%; P < .001). Among the variables included in each PI, the performance status and presence of extracranial metastases were the most important factors. Conclusion: A variety of prognostic models describe the survival of patients with BMs to a more or less satisfactory degree. Among the 5 PIs evaluated in the present study, GPA was the most powerful in predicting survival. Additional studies should include emerging biologic prognostic factors to improve the sensitivity of these PIs.

  17. Role of socioeconomic status measures in long-term mortality risk prediction after myocardial infarction.

    PubMed

    Molshatzki, Noa; Drory, Yaacov; Myers, Vicki; Goldbourt, Uri; Benyamini, Yael; Steinberg, David M; Gerber, Yariv

    2011-07-01

    The relationship of risk factors to outcomes has traditionally been assessed by measures of association such as odds ratio or hazard ratio and their statistical significance from an adjusted model. However, a strong, highly significant association does not guarantee a gain in stratification capacity. Using recently developed model performance indices, we evaluated the incremental discriminatory power of individual and neighborhood socioeconomic status (SES) measures after myocardial infarction (MI). Consecutive patients aged ≤65 years (N=1178) discharged from 8 hospitals in central Israel after incident MI in 1992 to 1993 were followed-up through 2005. A basic model (demographic variables, traditional cardiovascular risk factors, and disease severity indicators) was compared with an extended model including SES measures (education, income, employment, living with a steady partner, and neighborhood SES) in terms of Harrell c statistic, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). During the 13-year follow-up, 326 (28%) patients died. Cox proportional hazards models showed that all SES measures were significantly and independently associated with mortality. Furthermore, compared with the basic model, the extended model yielded substantial gains (all P<0.001) in c statistic (0.723 to 0.757), NRI (15.2%), IDI (5.9%), and relative IDI (32%). Improvement was observed both for sensitivity (classification of events) and specificity (classification of nonevents). This study illustrates the additional insights that can be gained from considering the IDI and NRI measures of model performance and suggests that, among community patients with incident MI, incorporating SES measures into a clinical-based model substantially improves long-term mortality risk prediction.
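
    A minimal sketch of how the two reclassification measures compare a basic and an extended risk model is given below; it uses the category-free (continuous) NRI and synthetic predicted probabilities, which need not match the categorical definitions and data used in the study.

    ```python
    import numpy as np

    def idi_and_continuous_nri(p_basic, p_extended, event):
        """Integrated discrimination improvement (IDI) and category-free NRI
        comparing an extended risk model against a basic one."""
        event = np.asarray(event, dtype=bool)
        # IDI: gain in mean predicted risk for events minus gain for non-events.
        idi = ((p_extended[event].mean() - p_basic[event].mean())
               - (p_extended[~event].mean() - p_basic[~event].mean()))
        # Category-free NRI: net proportion of events moved up plus net proportion
        # of non-events moved down by the extended model.
        up, down = p_extended > p_basic, p_extended < p_basic
        nri = ((up[event].mean() - down[event].mean())
               + (down[~event].mean() - up[~event].mean()))
        return idi, nri

    # Synthetic predicted probabilities for 1000 patients, ~28% of whom die.
    rng = np.random.default_rng(4)
    event = rng.random(1000) < 0.28
    p_basic = np.clip(0.28 + 0.10 * event + 0.10 * rng.standard_normal(1000), 0.01, 0.99)
    p_extended = np.clip(p_basic + 0.05 * event - 0.02 * ~event
                         + 0.02 * rng.standard_normal(1000), 0.01, 0.99)
    print(idi_and_continuous_nri(p_basic, p_extended, event))
    ```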

  18. Urological research in sub-Saharan Africa: a retrospective cohort study of abstracts presented at the Nigerian Association of Urological Surgeons conferences.

    PubMed

    Bello, Jibril Oyekunle

    2013-11-14

    Nigeria is one of the top three countries in Africa in terms of science research output and Nigerian urologists' biomedical research output contributes to this. Each year, urologists in Nigeria gather to present their recent research at the conference of the Nigerian Association of Urological Surgeons (NAUS). These abstracts are not thoroughly vetted as are full length manuscripts published in peer reviewed journals but the information they disseminate may affect clinical practice of attendees. This study aims to describe the characteristics of abstracts presented at the annual conferences of NAUS, the quality of the abstracts as determined by the subsequent publication of full length manuscripts in peer-review indexed journals and the factors that influence such successful publication. Abstracts presented at the 2007 to 2010 NAUS conferences were identified through conference abstracts books. Using a strict search protocol, publication in peer-reviewed journals was determined. The abstracts characteristics were analyzed and their quality judged by subsequent successful publishing of full length manuscripts. Statistical analysis was performed using SPSS 16.0 software to determine factors predictive of successful publication. Only 75 abstracts were presented at the NAUS 2007 to 2010 conferences; a quarter (24%) of the presented abstracts was subsequently published as full length manuscripts. Median time to publication was 15 months (range 2-40 months). Manuscripts whose result data were analyzed with 'beyond basic' statistics of frequencies and averages were more likely to be published than those with basic or no statistics. Quality of the abstracts and thus subsequent publication success is influenced by the use of 'beyond basic' statistics in analysis of the result data presented. There is a need for improvement in the quality of urological research from Nigeria.

  19. Seeing number using texture: How summary statistics account for reductions in perceived numerosity in the visual periphery.

    PubMed

    Balas, Benjamin

    2016-11-01

    Peripheral visual perception is characterized by reduced information about appearance due to constraints on how image structure is represented. Visual crowding is a consequence of excessive integration in the visual periphery. Basic phenomenology of visual crowding and other tasks have been successfully accounted for by a summary-statistic model of pooling, suggesting that texture-like processing is useful for how information is reduced in peripheral vision. I attempt to extend the scope of this model by examining a property of peripheral vision: reduced perceived numerosity in the periphery. I demonstrate that a summary-statistic model of peripheral appearance accounts for reduced numerosity in peripherally viewed arrays of randomly placed dots, but does not account for observed effects of dot clustering within such arrays. The model thus offers a limited account of how numerosity is perceived in the visual periphery. I also demonstrate that the model predicts that numerosity estimation is sensitive to element shape, which represents a novel prediction regarding the phenomenology of peripheral numerosity perception. Finally, I discuss ways to extend the model to a broader range of behavior and the potential for using the model to make further predictions about how number is perceived in untested scenarios in peripheral vision.

  20. The role of the airline transportation network in the prediction and predictability of global epidemics.

    PubMed

    Colizza, Vittoria; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro

    2006-02-14

    The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.

  1. Regional yield predictions of malting barley by remote sensing and ancillary data

    NASA Astrophysics Data System (ADS)

    Weissteiner, Christof J.; Braun, Matthias; Kuehbauch, Walter

    2004-02-01

    Yield forecasts are of high interest to the malting and brewing industry because they allow the most convenient purchasing policy for raw materials. In this investigation, malting barley (Hordeum vulgare L.) yield forecasts were performed for typical growing regions in South-Western Germany. Multisensor and multitemporal remote sensing data on the one hand, and ancillary meteorological, agrostatistical, topographical and pedological data on the other, were used as inputs for prediction models based on an empirical-statistical modeling approach. Since spring barley production depends on both acreage and yield per area, a classification step is needed; it was performed with a supervised multitemporal classification algorithm using optical remote sensing data (LANDSAT TM/ETM+). A pixel-based and an object-oriented classification algorithm were compared. The basic version of the yield estimation model was built by linearly correlating remote sensing data (NOAA-AVHRR NDVI), CORINE land cover data and agrostatistical data. In an extended version, meteorological data (temperature, precipitation, etc.) and soil data were incorporated. Both the basic and the extended prediction systems led to feasible results, depending on the selection of the time span for NDVI accumulation.

  2. A Quantile Regression Approach to Understanding the Relations Among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students.

    PubMed

    Tighe, Elizabeth L; Schatschneider, Christopher

    2016-07-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82%-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. © Hammill Institute on Disabilities 2014.

  3. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624

  4. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
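
    As a small illustration of point (3), the sketch below solves Henderson's mixed model equations with numpy for a toy data set; the identity relationship matrix and the variance ratio are assumptions made for the example, not values from the review.

    ```python
    import numpy as np

    def solve_mme(X, Z, y, lam, Ainv=None):
        """Henderson's mixed model equations for y = Xb + Zu + e:
            [X'X      X'Z          ] [b]   [X'y]
            [Z'X  Z'Z + lam * Ainv ] [u] = [Z'y]
        where lam = sigma_e^2 / sigma_u^2 and Ainv is the inverse relationship matrix."""
        q = Z.shape[1]
        if Ainv is None:
            Ainv = np.eye(q)                       # assume unrelated genotypes
        lhs = np.vstack([np.hstack([X.T @ X, X.T @ Z]),
                         np.hstack([Z.T @ X, Z.T @ Z + lam * Ainv])])
        rhs = np.concatenate([X.T @ y, Z.T @ y])
        sol = np.linalg.solve(lhs, rhs)
        return sol[: X.shape[1]], sol[X.shape[1]:]   # fixed effects, BLUPs

    # Tiny example: 6 plots, one overall mean (fixed), 3 genotypes (random).
    y = np.array([5.1, 5.4, 6.0, 6.3, 4.8, 5.0])
    X = np.ones((6, 1))
    Z = np.kron(np.eye(3), np.ones((2, 1)))          # two replicates per genotype
    b_hat, u_hat = solve_mme(X, Z, y, lam=2.0)       # lam assumed, not estimated
    print("mean:", b_hat.round(3), "genotype BLUPs:", u_hat.round(3))
    ```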

  5. Information trimming: Sufficient statistics, mutual information, and predictability from effective channel states

    NASA Astrophysics Data System (ADS)

    James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2017-06-01

    One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
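
    For reference, the basic quantity discussed here, mutual information, can be computed directly from a joint distribution; the tiny table below is made up and does not attempt to reproduce the paper's sufficient-statistic or causal-state constructions.

    ```python
    import numpy as np

    def mutual_information(p_xy):
        """I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ), in bits."""
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_y = p_xy.sum(axis=0, keepdims=True)
        mask = p_xy > 0
        return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask]))

    # A small joint distribution over X (rows) and Y (columns).
    p_xy = np.array([[0.30, 0.10],
                     [0.05, 0.55]])
    print(f"I(X;Y) = {mutual_information(p_xy):.3f} bits")
    ```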

  6. The relevance of basic sciences in undergraduate medical education.

    PubMed

    Lynch, C; Grant, T; McLoughlin, P; Last, J

    2016-02-01

    Evolving and changing undergraduate medical curricula raise concerns that there will no longer be a place for basic sciences. National and international trends show that 5-year programmes with a pre-requisite for school chemistry are growing more prevalent. National reports in Ireland show a decline in the availability of school chemistry and physics. This observational cohort study considers if the basic sciences of physics, chemistry and biology should be a prerequisite to entering medical school, be part of the core medical curriculum or if they have a place in the practice of medicine. Comparisons of means, correlation and linear regression analysis assessed the degree of association between predictors (school and university basic sciences) and outcomes (year and degree GPA) for entrants to a 6-year Irish medical programme between 2006 and 2009 (n = 352). We found no statistically significant difference in medical programme performance between students with/without prior basic science knowledge. The Irish school exit exam and its components were mainly weak predictors of performance (-0.043 ≤ r ≤ 0.396). Success in year one of medicine, which includes a basic science curriculum, was indicative of later success (0.194 ≤ r² ≤ 0.534). University basic sciences were found to be more predictive than school sciences in undergraduate medical performance in our institution. The increasing emphasis of basic sciences in medical practice and the declining availability of school sciences should mandate medical schools in Ireland to consider how removing basic sciences from the curriculum might impact on future applicants.

  7. Statistical Issues for Uncontrolled Reentry Hazards Empirical Tests of the Predicted Footprint for Uncontrolled Satellite Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2011-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, material, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. Because this information is used in making policy and engineering decisions, it is important that these assumptions be tested using empirical data. This study uses the latest database of known uncontrolled reentry locations measured by the United States Department of Defense. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors in the final stages of reentry - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and possibly change the probability of reentering over a given location. In this paper, the measured latitude and longitude distributions of these objects are directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.

  8. Random glucose is useful for individual prediction of type 2 diabetes: results of the Study of Health in Pomerania (SHIP).

    PubMed

    Kowall, Bernd; Rathmann, Wolfgang; Giani, Guido; Schipf, Sabine; Baumeister, Sebastian; Wallaschofski, Henri; Nauck, Matthias; Völzke, Henry

    2013-04-01

    Random glucose is widely used in routine clinical practice. We investigated whether this non-standardized glycemic measure is useful for individual diabetes prediction. The Study of Health in Pomerania (SHIP), a population-based cohort study in north-east Germany, included 3107 diabetes-free persons aged 31-81 years at baseline in 1997-2001. 2475 persons participated at 5-year follow-up and gave self-reports of incident diabetes. For the total sample and for subjects aged ≥50 years, statistical properties of prediction models with and without random glucose were compared. A basic model (including age, sex, diabetes of parents, hypertension and waist circumference) and a comprehensive model (additionally including various lifestyle variables and blood parameters, but not HbA1c) performed statistically significantly better after adding random glucose (e.g., the area under the receiver-operating curve (AROC) increased from 0.824 to 0.856 after adding random glucose to the comprehensive model in the total sample). Likewise, adding random glucose to prediction models which included HbA1c led to significant improvements of predictive ability (e.g., for subjects ≥50 years, AROC increased from 0.824 to 0.849 after adding random glucose to the comprehensive model+HbA1c). Random glucose is useful for individual diabetes prediction, and improves prediction models including HbA1c. Copyright © 2012 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.

  9. Receiver operating characteristic (ROC) curves: review of methods with applications in diagnostic medicine

    NASA Astrophysics Data System (ADS)

    Obuchowski, Nancy A.; Bullen, Jennifer A.

    2018-04-01

    Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
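
    A minimal sketch of an ROC analysis with scikit-learn is shown below; the scores are synthetic and stand in for a continuous diagnostic test output, not the transmission-ultrasound data of the illustration study.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Synthetic test scores: diseased cases tend to score higher than healthy ones.
    rng = np.random.default_rng(5)
    y_true = np.concatenate([np.ones(100), np.zeros(200)])
    scores = np.concatenate([rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 200)])

    fpr, tpr, thresholds = roc_curve(y_true, scores)
    print(f"AUC = {roc_auc_score(y_true, scores):.3f}")

    # Sensitivity and specificity at the threshold closest to the upper-left corner.
    best = np.argmin(fpr**2 + (1 - tpr)**2)
    print(f"threshold={thresholds[best]:.2f}  sensitivity={tpr[best]:.2f}  "
          f"specificity={1 - fpr[best]:.2f}")
    ```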

  10. Treated cabin acoustic prediction using statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Yoerkie, Charles A.; Ingraham, Steven T.; Moore, James A.

    1987-01-01

    The application of statistical energy analysis (SEA) to the modeling and design of helicopter cabin interior noise control treatment is demonstrated. The information presented here is obtained from work sponsored at NASA Langley for the development of analytic modeling techniques and the basic understanding of cabin noise. Utility and executive interior models are developed directly from existing S-76 aircraft designs. The relative importance of panel transmission loss (TL), acoustic leakage, and absorption to the control of cabin noise is shown using the SEA modeling parameters. It is shown that the major cabin noise improvement below 1000 Hz comes from increased panel TL, while above 1000 Hz it comes from reduced acoustic leakage and increased absorption in the cabin and overhead cavities.

  11. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    NASA Astrophysics Data System (ADS)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity, and temperature. These independent variables can be used as system inputs to predict the level of plant growth and development. The prediction system uses the Mamdani fuzzy inference method. The system was built with the Fuzzy Inference System (FIS) functions of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. An FIS basically consists of four units: a fuzzification unit, a fuzzy reasoning unit, a knowledge base, and a defuzzification unit. In addition, the effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output-surface diagram, and the prediction system's data were evaluated with statistical tests based on the multiple linear regression method, including multiple linear regression analysis, t tests, F tests, the coefficient of determination, and predictor contributions, calculated with SPSS (Statistical Product and Service Solutions) software.
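
    A hand-rolled sketch of the Mamdani steps described above (fuzzification, rule evaluation, aggregation, and centroid defuzzification) is given below in Python rather than the MATLAB toolbox used by the authors; the membership functions, rules, and input ranges are invented for illustration.

    ```python
    import numpy as np

    def trimf(x, a, b, c):
        # Triangular membership function: feet at a and c, peak at b (a < b < c).
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def mamdani_growth(nutrient_ppm, ph):
        """Toy Mamdani FIS: fuzzify two inputs, evaluate two rules with min/max
        operators, aggregate, and defuzzify the output (growth score, 0-10)
        by the centroid method. All membership functions are made up."""
        g = np.linspace(0, 10, 501)                   # output universe

        # Fuzzification of the crisp inputs.
        nut_low = trimf(nutrient_ppm, -1, 0, 800)
        nut_high = trimf(nutrient_ppm, 400, 1200, 1201)
        ph_good = trimf(ph, 5.5, 6.5, 7.5)
        ph_bad = max(trimf(ph, -1, 4, 6), trimf(ph, 7.5, 10, 15))

        # Output fuzzy sets.
        growth_poor = trimf(g, -0.1, 0, 5)
        growth_good = trimf(g, 5, 10, 10.1)

        # Rule 1: IF nutrient is high AND pH is good THEN growth is good.
        # Rule 2: IF nutrient is low  OR pH is bad  THEN growth is poor.
        w_good = min(nut_high, ph_good)
        w_poor = max(nut_low, ph_bad)

        # Min implication, max aggregation, centroid defuzzification.
        aggregated = np.maximum(np.minimum(w_good, growth_good),
                                np.minimum(w_poor, growth_poor))
        return float(np.sum(g * aggregated) / (np.sum(aggregated) + 1e-12))

    print(f"favourable conditions: {mamdani_growth(900, 6.4):.2f}")
    print(f"poor conditions      : {mamdani_growth(250, 4.5):.2f}")
    ```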

  12. Teaching Basic Probability in Undergraduate Statistics or Management Science Courses

    ERIC Educational Resources Information Center

    Naidu, Jaideep T.; Sanford, John F.

    2017-01-01

    Standard textbooks in core Statistics and Management Science classes present various examples to introduce basic probability concepts to undergraduate business students. These include tossing of a coin, throwing a die, and examples of that nature. While these are good examples to introduce basic probability, we use improvised versions of Russian…

  13. Intuitive statistics by 8-month-old infants

    PubMed Central

    Xu, Fei; Garcia, Vashti

    2008-01-01

    Human learners make inductive inferences based on small amounts of data: we generalize from samples to populations and vice versa. The academic discipline of statistics formalizes these intuitive statistical inferences. What is the origin of this ability? We report six experiments investigating whether 8-month-old infants are “intuitive statisticians.” Our results showed that, given a sample, the infants were able to make inferences about the population from which the sample had been drawn. Conversely, given information about the entire population of relatively small size, the infants were able to make predictions about the sample. Our findings provide evidence that infants possess a powerful mechanism for inductive learning, either using heuristics or basic principles of probability. This ability to make inferences based on samples or information about the population develops early and in the absence of schooling or explicit teaching. Human infants may be rational learners from very early in development. PMID:18378901

  14. A Wave Chaotic Study of Quantum Graphs with Microwave Networks

    NASA Astrophysics Data System (ADS)

    Fu, Ziyuan

    Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study the quantum graphs with a wave chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped due to multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I have done some investigations on the Random Coupling Model, which are useful for further research.

  15. Statistical characterization of thermal plumes in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng-Qi; Xie, Yi-Chao; Sun, Chao; Xia, Ke-Qing

    2016-09-01

    We report an experimental study of the statistical properties of thermal plumes in turbulent thermal convection. A method is proposed to extract the basic characteristics of thermal plumes from temporal temperature measurements inside the convection cell. Both the plume amplitude A and the cap width w, in the time domain, are found to be approximately log-normally distributed. In particular, the normalized most probable front width is found to be a characteristic scale of thermal plumes, which is much larger than the thermal boundary layer thickness. Over a wide range of the Rayleigh number, the statistical characterizations of the thermal fluctuations of the plumes and the turbulent background, the plume front width, and the plume spacing are discussed and compared with theoretical predictions and morphological observations. For the most part, good agreement is found with the direct observations.

  16. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  17. Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beggs, W.J.

    1981-02-01

    This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.

  18. Prediction of shock initiation thresholds and ignition probability of polymer-bonded explosives using mesoscale simulations

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min

    2018-05-01

    The design of new materials requires establishment of macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments for quantification of statistical variations of material behavior due to inherent material heterogeneities. The particular thresholds and ignition probabilities predicted are expressed in James type and Walker-Wasley type relations, leading to the establishment of explicit analytical expressions for the ignition probability as function of loading. Specifically, the ignition thresholds corresponding to any given level of ignition probability and ignition probability maps are predicted for PBX 9404 for the loading regime of Up = 200-1200 m/s where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict the macroscopic engineering material response relations out of material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.

  19. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application (the computation of the average charge of ionizable amino acids in proteins) is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
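
    A toy numerical sketch of the basic clustering idea is given below: the exact Boltzmann average over all 2^N microstates of a small binary-site system is compared with a cluster approximation that enumerates each cluster exactly and ignores inter-cluster interactions. The couplings and energies are made up and far simpler than the biomolecular electrostatics application.

    ```python
    import numpy as np
    from itertools import product

    def boltzmann_average(n_sites, J, h, beta=1.0):
        """Exact thermal average of site occupancy by enumerating all 2^N microstates
        of a binary-site system with pairwise couplings J and site energies h."""
        states = np.array(list(product([0, 1], repeat=n_sites)))
        energies = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
        weights = np.exp(-beta * energies)
        weights /= weights.sum()
        return weights @ states                       # average occupancy per site

    def clustered_average(clusters, J, h, beta=1.0):
        """Cluster approximation: exact enumeration inside each cluster,
        inter-cluster couplings ignored."""
        avg = np.zeros(len(h))
        for cl in clusters:
            cl = list(cl)
            avg[cl] = boltzmann_average(len(cl), J[np.ix_(cl, cl)], h[cl], beta)
        return avg

    # Made-up 6-site system with a weak coupling between two 3-site clusters.
    rng = np.random.default_rng(6)
    h = rng.normal(0, 1, 6)
    J = np.zeros((6, 6))
    J[:3, :3] = rng.normal(0, 0.5, (3, 3))
    J[3:, 3:] = rng.normal(0, 0.5, (3, 3))
    J = 0.5 * (J + J.T)
    np.fill_diagonal(J, 0)
    J[0, 3] = J[3, 0] = 0.05                          # weak inter-cluster link

    exact = boltzmann_average(6, J, h)
    approx = clustered_average([(0, 1, 2), (3, 4, 5)], J, h)
    print("max error of cluster approximation:", np.round(np.abs(exact - approx).max(), 4))
    ```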

  20. Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects.

    PubMed

    Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K

    2012-03-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. A Mediation Model to Explain the Role of Mathematics Skills and Probabilistic Reasoning on Statistics Achievement

    ERIC Educational Resources Information Center

    Primi, Caterina; Donati, Maria Anna; Chiesi, Francesca

    2016-01-01

    Among the wide range of factors related to the acquisition of statistical knowledge, competence in basic mathematics, including basic probability, has received much attention. In this study, a mediation model was estimated to derive the total, direct, and indirect effects of mathematical competence on statistics achievement taking into account…

  2. Improvement of cardiovascular risk prediction: time to review current knowledge, debates, and fundamentals on how to assess test characteristics.

    PubMed

    Romanens, Michel; Ackermann, Franz; Spence, John David; Darioli, Roger; Rodondi, Nicolas; Corti, Roberto; Noll, Georg; Schwenkglenks, Matthias; Pencina, Michael

    2010-02-01

    Cardiovascular risk assessment might be improved with the addition of emerging, new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risk, odds ratios, receiver-operating curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and integrated discrimination. This serves to determine whether a new test has an added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate novel approaches. This work serves as a review and basic work for the development of new guidelines on cardiovascular risk prediction, taking into account emerging tests, to be proposed by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology in the future.
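
    The posttest risk calculation based on likelihood ratios that the review discusses is a one-line application of Bayes' theorem on the odds scale; the sketch below is a minimal illustration with made-up numbers (a 10% pre-test risk and a positive likelihood ratio of 4), not values taken from the article.

      def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
          """Odds-form Bayes update: post-test odds = pre-test odds x LR."""
          pretest_odds = pretest_p / (1.0 - pretest_p)
          posttest_odds = pretest_odds * likelihood_ratio
          return posttest_odds / (1.0 + posttest_odds)

      # Hypothetical example: 10% pre-test risk, imaging test with LR+ = 4
      print(round(posttest_probability(0.10, 4.0), 3))   # -> 0.308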

  3. Statistical Power for the Comparative Regression Discontinuity Design With a Pretest No-Treatment Control Function: Theory and Evidence From the National Head Start Impact Study.

    PubMed

    Tang, Yang; Cook, Thomas D

    2018-01-01

    The basic regression discontinuity design (RDD) has less statistical power than a randomized control trial (RCT) with the same sample size. Adding a no-treatment comparison function to the basic RDD creates a comparative RDD (CRD); and when this function comes from the pretest value of the study outcome, a CRD-Pre design results. We use a within-study comparison (WSC) to examine the power of CRD-Pre relative to both basic RDD and RCT. We first build the theoretical foundation for power in CRD-Pre, then derive the relevant variance formulae, and finally compare them to the theoretical RCT variance. We conclude from this theoretical part of this article that (1) CRD-Pre's power gain depends on the partial correlation between the pretest and posttest measures after conditioning on the assignment variable, (2) CRD-Pre is less responsive than basic RDD to how the assignment variable is distributed and where the cutoff is located, and (3) under a variety of conditions, the efficiency of CRD-Pre is very close to that of the RCT. Data from the National Head Start Impact Study are then used to construct RCT, RDD, and CRD-Pre designs and to compare their power. The empirical results indicate (1) a high level of correspondence between the predicted and obtained power results for RDD and CRD-Pre relative to the RCT, and (2) power levels in CRD-Pre and RCT that are very close. The study is unique among WSCs for its focus on the correspondence between RCT and observational study standard errors rather than means.
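
    A rough Monte Carlo sketch of the comparison the article makes is given below. It is our own construction under simple assumptions (a linear outcome model, a sharp cutoff at zero, a pretest correlated with the posttest), not the authors' derivation or data; it only illustrates how the empirical standard error of the treatment-effect estimate can be compared across an RCT, a basic RDD, and a CRD-Pre-style analysis that adds the pretest.

      import numpy as np

      rng = np.random.default_rng(1)
      n, effect, reps = 500, 0.3, 2000

      def treatment_coef(y, X):
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return beta[1]                                   # coefficient on the treatment column

      estimates = {"RCT": [], "basic RDD": [], "CRD-Pre": []}
      for _ in range(reps):
          a = rng.normal(size=n)                           # assignment variable
          pre = 0.6 * a + rng.normal(scale=0.8, size=n)    # pretest measure of the outcome
          noise = rng.normal(scale=0.5, size=n)

          t_rct = rng.integers(0, 2, size=n).astype(float) # randomized assignment
          y_rct = effect * t_rct + 0.6 * a + 0.9 * pre + noise
          estimates["RCT"].append(
              treatment_coef(y_rct, np.column_stack([np.ones(n), t_rct, a, pre])))

          t_rdd = (a > 0).astype(float)                    # sharp cutoff at zero
          y_rdd = effect * t_rdd + 0.6 * a + 0.9 * pre + noise
          estimates["basic RDD"].append(
              treatment_coef(y_rdd, np.column_stack([np.ones(n), t_rdd, a])))
          estimates["CRD-Pre"].append(
              treatment_coef(y_rdd, np.column_stack([np.ones(n), t_rdd, a, pre])))

      for name, vals in estimates.items():
          print(f"{name:10s} empirical SE of effect estimate: {np.std(vals):.3f}")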

  4. Comparison of the performances of copeptin and multiple biomarkers in long-term prognosis of severe traumatic brain injury.

    PubMed

    Zhang, Zu-Yong; Zhang, Li-Xin; Dong, Xiao-Qiao; Yu, Wen-Hua; Du, Quan; Yang, Ding-Bo; Shen, Yong-Feng; Wang, Hao; Zhu, Qiang; Che, Zhi-Hao; Liu, Qun-Jie; Jiang, Li; Du, Yuan-Feng

    2014-10-01

    Enhanced blood levels of copeptin correlate with poor clinical outcomes after acute critical illness. This study aimed to compare the prognostic performances of plasma concentrations of copeptin and other biomarkers like myelin basic protein, glial fibrillary astrocyte protein, S100B, neuron-specific enolase, phosphorylated axonal neurofilament subunit H, Tau and ubiquitin carboxyl-terminal hydrolase L1 in severe traumatic brain injury. We recruited 102 healthy controls and 102 acute patients with severe traumatic brain injury. Plasma concentrations of these biomarkers were determined using enzyme-linked immunosorbent assay. Their prognostic predictive performances of 6-month mortality and unfavorable outcome (Glasgow Outcome Scale score of 1-3) were compared. Plasma concentrations of these biomarkers were statistically significantly higher in all patients than in healthy controls, in non-survivors than in survivors and in patients with unfavorable outcome than with favorable outcome. Areas under receiver operating characteristic curves of plasma concentrations of these biomarkers were similar to those of Glasgow Coma Scale score for prognostic prediction. Except plasma copeptin concentration, other biomarkers concentrations in plasma did not statistically significantly improve prognostic predictive value of Glasgow Coma Scale score. Copeptin levels may be a useful tool to predict long-term clinical outcomes after severe traumatic brain injury and have a potential to assist clinicians. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
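
    The normal equations mentioned above can be written out concretely. The following short sketch (ours, not the report's code) fits a one-step-ahead linear predictor to a synthetic AR(2) signal by solving the Toeplitz system of sample autocorrelations; a blind method, by contrast, would avoid estimating these cross moments directly.

      import numpy as np

      rng = np.random.default_rng(0)
      n, order = 5000, 4
      x = np.zeros(n)                              # synthetic AR(2) signal
      for t in range(2, n):
          x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

      def autocorr(x, lag):
          return np.dot(x[:len(x) - lag], x[lag:]) / len(x)

      # Normal equations R a = r, with R the Toeplitz autocorrelation matrix
      R = np.array([[autocorr(x, abs(i - j)) for j in range(order)] for i in range(order)])
      r = np.array([autocorr(x, k + 1) for k in range(order)])
      a = np.linalg.solve(R, r)
      print("predictor coefficients:", np.round(a, 3))    # close to [0.75, -0.5, 0, 0]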

  6. [Prediction of life expectancy for prostate cancer patients based on the kinetic theory of aging of living systems].

    PubMed

    Viktorov, A A; Zharinov, G M; Neklasova, N Ju; Morozova, E E

    2017-01-01

    The article presents a methodical approach for prediction of life expectancy for people diagnosed with prostate cancer based on the kinetic theory of aging of living systems. The life expectancy is calculated by solving the differential equation for the rate of aging for three different stages of life: «normal» life, life with prostate cancer, and life after combination therapy for prostate cancer. The mathematical model of aging for each stage of life has its own parameters, identified by statistical analysis of healthcare data from Zharinov's databank and the Rosstat CDR NES databank. The core of the methodical approach is the statistical correlation between the growth rate of the prostate specific antigen level (PSA-level), or the PSA doubling time (PSA DT), before therapy and lifespan: the higher the PSA DT, the greater the lifespan. The patients were grouped under the «fast PSA DT» and «slow PSA DT» categories. Satisfactory agreement between calculations and experiment is shown. The prediction error of group life expectancy is due to the completeness and reliability of the main data source. Detailed monitoring of the basic health indicators throughout each person's life in each analyzed group is required; the absence of this information makes it impossible to predict individual life expectancy.
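
    The PSA doubling time used to form the «fast» and «slow» groups is conventionally computed from two PSA measurements under an assumption of exponential growth; the numbers below are hypothetical and only illustrate the formula.

      import math

      def psa_doubling_time(psa_initial, psa_final, months_between):
          """Doubling time in months, assuming exponential PSA growth."""
          return months_between * math.log(2) / math.log(psa_final / psa_initial)

      print(round(psa_doubling_time(4.0, 10.0, 12.0), 1))   # about 9.1 months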

  7. Application of Statistical Thermodynamics To Predict the Adsorption Properties of Polypeptides in Reversed-Phase HPLC.

    PubMed

    Tarasova, Irina A; Goloborodko, Anton A; Perlova, Tatyana Y; Pridatchenko, Marina L; Gorshkov, Alexander V; Evreinov, Victor V; Ivanov, Alexander R; Gorshkov, Mikhail V

    2015-07-07

    The theory of critical chromatography for biomacromolecules (BioLCCC) describes polypeptide retention in reversed-phase HPLC using the basic principles of statistical thermodynamics. However, whether this theory correctly depicts a variety of empirical observations and laws introduced for peptide chromatography over the last decades remains to be determined. In this study, by comparing theoretical results with experimental data, we demonstrate that the BioLCCC: (1) fits the empirical dependence of the polypeptide retention on the amino acid sequence length with R^2 > 0.99 and allows in silico determination of the linear regression coefficients of the log-length correction in the additive model for arbitrary sequences and lengths and (2) predicts the distribution coefficients of polypeptides with an accuracy from 0.98 to 0.99 R^2. The latter enables direct calculation of the retention factors for given solvent compositions and modeling of the migration dynamics of polypeptides separated under isocratic or gradient conditions. The obtained results demonstrate that the suggested theory correctly relates the main aspects of polypeptide separation in reversed-phase HPLC.

  8. Inter-model Diversity of ENSO simulation and its relation to basic states

    NASA Astrophysics Data System (ADS)

    Kug, J. S.; Ham, Y. G.

    2016-12-01

    In this study, a new methodology is developed to improve the climate simulation of state-of-the-art coupled global climate models (GCMs) by a postprocessing based on the intermodel diversity. Based on the close connection between the interannual variability and climatological states, a distinctive relation between the intermodel diversity of the interannual variability and that of the basic state is found. Based on this relation, the simulated interannual variabilities can be improved by correcting their climatological bias. To test this methodology, the dominant intermodel difference in precipitation responses during El Niño-Southern Oscillation (ENSO) is investigated, along with its relationship with the climatological state. It is found that the dominant intermodel diversity of the ENSO precipitation in phase 5 of the Coupled Model Intercomparison Project (CMIP5) is associated with the zonal shift of the positive precipitation center during El Niño. This dominant intermodel difference is significantly correlated with the basic states. The models with wetter (dryer) climatology than the climatology of the multimodel ensemble (MME) over the central Pacific tend to shift positive ENSO precipitation anomalies to the east (west). Based on the models' systematic errors in the atmospheric ENSO response and their bias, the models with a better climatological state tend to simulate more realistic atmospheric ENSO responses. Therefore, the statistical method to correct the ENSO response mostly improves the ENSO response. After the statistical correction, the simulation quality of the MME ENSO precipitation is distinctly improved. These results suggest that the present methodology can also be applied to improving climate projection and seasonal climate prediction.
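
    The post-processing idea can be sketched with synthetic numbers: across models, regress an ENSO-response index on a climatological bias and remove the bias-linked part of each model's response. The sketch below is only an illustration of that logic; the data, the linear relationship, and the magnitudes are invented, not taken from CMIP5.

      import numpy as np

      rng = np.random.default_rng(2)
      n_models = 20
      clim_bias = rng.normal(loc=0.5, scale=1.0, size=n_models)   # e.g. central-Pacific precipitation bias
      true_response = 1.0                                         # "observed" target value
      response = true_response + 0.8 * clim_bias + rng.normal(scale=0.2, size=n_models)

      slope, intercept = np.polyfit(clim_bias, response, 1)       # intermodel relation
      corrected = response - slope * clim_bias                    # remove the bias-linked part

      print("raw MME error      :", round(abs(response.mean() - true_response), 3))
      print("corrected MME error:", round(abs(corrected.mean() - true_response), 3))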

  9. Document Delivery Capabilities of Major Biomedical Libraries in 1968: Results of a National Survey Employing Standardized Tests *

    PubMed Central

    Orr, Richard H.; Schless, Arthur P.

    1972-01-01

    The standardized Document Delivery Tests (DDT's) developed earlier (Bulletin 56: 241-267, July 1968) were employed to assess the capability of ninety-two medical school libraries for meeting the document needs of biomedical researchers, and the capability of fifteen major resource libraries for filling I-L requests from biomedical libraries. The primary test data are summarized as statistics on the observed availability status of the 300 plus documents in the test samples, and as measures expressing capability as a function of the mean time that would be required for users to obtain test sample documents. A mathematical model is developed in which the virtual capability of a library, as seen by its users, equals the algebraic sum of the basic capability afforded by its holdings; the combined losses attributable to use of its collection, processing, relative inaccessibility, and housekeeping problems; and the gain realized by coupling with other resources (I-L borrowing). For a particular library, or group of libraries, empirical values for each of these variables can be calculated easily from the capability measures and the status statistics. Regression equations are derived that provide useful predictions of basic capability from collection size. The most important result of this work is that cost-effectiveness analyses can now be used as practical decision aids in managing a basic library service. A program of periodic surveys and further development of DDT's is recommended as appropriate for the Medical Library Association. PMID:5054305

  10. Cognitive predictors of skilled performance with an advanced upper limb multifunction prosthesis: a preliminary analysis.

    PubMed

    Hancock, Laura; Correia, Stephen; Ahern, David; Barredo, Jennifer; Resnik, Linda

    2017-07-01

    Purpose: The objectives were to 1) identify major cognitive domains involved in learning to use the DEKA Arm; 2) specify cognitive domain-specific skills associated with basic versus advanced users; and 3) examine whether baseline memory and executive function predicted learning. Method: Sample included 35 persons with upper limb amputation. Subjects were administered a brief neuropsychological test battery prior to start of DEKA Arm training, as well as physical performance measures at the onset of, and following, training. Multiple regression models controlling for age and including neuropsychological tests were developed to predict physical performance scores. Prosthetic performance scores were divided into quartiles and independent samples t-tests compared neuropsychological test scores of advanced scorers and basic scorers. Baseline neuropsychological test scores were used to predict change in scores on physical performance measures across time. Results: Cognitive domains of attention and processing speed were statistically significantly related to proficiency of DEKA Arm use and predicted level of proficiency. Conclusions: Results support use of neuropsychological tests to predict learning and use of a multifunctional prosthesis. Assessment of cognitive status at the outset of training may help set expectations for the duration and outcomes of treatment. Implications for Rehabilitation: Cognitive domains of attention and processing speed were significantly related to level of proficiency of an advanced multifunctional prosthesis (the DEKA Arm) after training. Results provide initial support for the use of neuropsychological tests to predict advanced learning and use of a multifunctional prosthesis in upper-limb amputees. Results suggest that assessment of patients' cognitive status at the outset of upper limb prosthetic training may, in the future, help patients, their families and therapists set expectations for the duration and intensity of training and may help set reasonable proficiency goals.

  11. The External Validity of Prediction Models for the Diagnosis of Obstructive Coronary Artery Disease in Patients With Stable Chest Pain: Insights From the PROMISE Trial.

    PubMed

    Genders, Tessa S S; Coles, Adrian; Hoffmann, Udo; Patel, Manesh R; Mark, Daniel B; Lee, Kerry L; Steyerberg, Ewout W; Hunink, M G Myriam; Douglas, Pamela S

    2018-03-01

    This study sought to externally validate prediction models for the presence of obstructive coronary artery disease (CAD). A better assessment of the probability of CAD may improve the identification of patients who benefit from noninvasive testing. Stable chest pain patients from the PROMISE (Prospective Multicenter Imaging Study for Evaluation of Chest Pain) trial with computed tomography angiography (CTA) or invasive coronary angiography (ICA) were included. The authors assumed that patients with CTA showing 0% stenosis and a coronary artery calcium (CAC) score of 0 were free of obstructive CAD (≥50% stenosis) on ICA, and they multiply imputed missing ICA results based on clinical variables and CTA results. Predicted CAD probabilities were calculated using published coefficients for 3 models: basic model (age, sex, chest pain type), clinical model (basic model + diabetes, hypertension, dyslipidemia, and smoking), and clinical + CAC score model. The authors assessed discrimination and calibration, and compared published effects with observed predictor effects. In 3,468 patients (1,805 women; mean 60 years of age; 779 [23%] with obstructive CAD on CTA), the models demonstrated moderate-good discrimination, with C-statistics of 0.69 (95% confidence interval [CI]: 0.67 to 0.72), 0.72 (95% CI: 0.69 to 0.74), and 0.86 (95% CI: 0.85 to 0.88) for the basic, clinical, and clinical + CAC score models, respectively. Calibration was satisfactory although typical chest pain and diabetes were less predictive and CAC score was more predictive than was suggested by the models. Among the 31% of patients for whom the clinical model predicted a low (≤10%) probability of CAD, actual prevalence was 7%; among the 48% for whom the clinical + CAC score model predicted a low probability the observed prevalence was 2%. In 2 sensitivity analyses excluding imputed data, similar results were obtained using CTA as the outcome, whereas in those who underwent ICA the models significantly underestimated CAD probability. Existing clinical prediction models can identify patients with a low probability of obstructive CAD. Obstructive CAD on ICA was imputed for 61% of patients; hence, further validation is necessary. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
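
    Applying a published clinical prediction model of this kind amounts to evaluating a logistic regression with fixed coefficients. The sketch below shows the mechanics only; the coefficients and variables are hypothetical placeholders, not those of the basic, clinical, or clinical + CAC score models validated in the paper.

      import math

      def predicted_cad_probability(age, male, typical_chest_pain,
                                    b0=-7.0, b_age=0.07, b_male=1.3, b_typical=1.6):
          """Logistic model: p = 1 / (1 + exp(-(b0 + sum of coefficient * predictor)))."""
          lp = b0 + b_age * age + b_male * male + b_typical * typical_chest_pain
          return 1.0 / (1.0 + math.exp(-lp))

      # Hypothetical 60-year-old man with atypical chest pain
      print(round(predicted_cad_probability(age=60, male=1, typical_chest_pain=0), 3))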

  12. Species distribution models for a migratory bird based on citizen science and satellite tracking data

    USGS Publications Warehouse

    Coxen, Christopher L.; Frey, Jennifer K.; Carleton, Scott A.; Collins, Daniel P.

    2017-01-01

    Species distribution models can provide critical baseline distribution information for the conservation of poorly understood species. Here, we compared the performance of band-tailed pigeon (Patagioenas fasciata) species distribution models created using Maxent and derived from two separate presence-only occurrence data sources in New Mexico: 1) satellite tracked birds and 2) observations reported in eBird basic data set. Both models had good accuracy (test AUC > 0.8 and True Skill Statistic > 0.4), and high overlap between suitability scores (I statistic 0.786) and suitable habitat patches (relative rank 0.639). Our results suggest that, at the state-wide level, eBird occurrence data can effectively model similar species distributions as satellite tracking data. Climate change models for the band-tailed pigeon predict a 35% loss in area of suitable climate by 2070 if CO2 emissions drop to 1990 levels by 2100, and a 45% loss by 2070 if we continue current CO2 emission levels through the end of the century. These numbers may be conservative given the predicted increase in drought, wildfire, and forest pest impacts to the coniferous forests the species inhabits in New Mexico. The northern portion of the species’ range in New Mexico is predicted to be the most viable through time.

  13. Basic EMC (Electromagnetic compatibility) technology advancement for C3 (Command, control, and communications) systems. Volume 6

    NASA Astrophysics Data System (ADS)

    Weiner, D.; Paul, C. R.; Whalen, J.

    1985-04-01

    This research effort was devoted to eliminating some of the basic technological gaps in the two important areas of: (1) electromagnetic effects (EM) on microelectronic circuits and (2) EM coupling and testing. The results are presented in fourteen reports which have been organized into six volumes. The reports are briefly summarized in this volume. In addition, an experiment is described which was performed to demonstrate the feasibility of applying several of the results to a problem involving electromagnetic interference. Specifically, experimental results are provided for the randomness associated with: (1) crosstalk in cable harnesses and (2) demodulation of amplitude modulated (AM) signals in operational amplifiers. These results are combined to predict candidate probability density functions (pdf's) for the amplitude of an AM interfering signal required to turn on a light emitting diode. The candidate pdf's are shown to be statistically consistent with measured data.

  14. The energetic cost of walking: a comparison of predictive methods.

    PubMed

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species.

  15. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic, and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  16. Perceptions of D.M.D. student readiness for basic science courses in the United States: can online review modules help?

    PubMed

    Miller, C J; Aiken, S A; Metz, M J

    2015-02-01

    There can be a disconnect between the level of content covered in undergraduate coursework and the expectations of professional-level faculty of their incoming students. Some basic science faculty members may assume that students have a good knowledge base in the material and neglect to appropriately review, whilst others may spend too much class time reviewing basic material. It was hypothesised that the replacement of introductory didactic physiology lectures with interactive online modules could improve student preparedness prior to lectures. These modules would also allow faculty members to analyse incoming student abilities and save valuable face-to-face class time for alternative teaching strategies. Results indicated that the performance levels of incoming U.S. students were poor (57% average on a pre-test), and students often under-predicted their abilities (by 13% on average). Faculty expectations varied greatly between the different content areas and did not appear to correlate with the actual student performance. Three review modules were created which produced a statistically significant increase in post-test scores (46% increase, P < 0.0001, n = 114-115). The positive results of this study suggest a need to incorporate online review units in the basic science dental school courses and revise introductory material tailored to students' strengths and needs.

  17. Applied Problems and Use of Technology in an Aligned Way in Basic Courses in Probability and Statistics for Engineering Students--A Way to Enhance Understanding and Increase Motivation

    ERIC Educational Resources Information Center

    Zetterqvist, Lena

    2017-01-01

    Researchers and teachers often recommend motivating exercises and use of mathematics or statistics software for the teaching of basic courses in probability and statistics. Our courses are given to large groups of engineering students at Lund Institute of Technology. We found that the mere existence of real-life data and technology in a course…

  18. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  19. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2015-09-01

    The Earth's rotation reflects the coupling processes among the solid Earth, atmosphere, oceans, mantle, and core on multiple spatial and temporal scales. The Earth's rotation can be described by the Earth orientation parameters, abbreviated as EOP (mainly including two polar motion components, PM_X and PM_Y, and the variation in the length of day, ΔLOD). The EOP is crucial in the transformation between the terrestrial and celestial reference systems, and has important applications in many areas such as deep space exploration, precise satellite orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis addresses the following three aspects for the purpose of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required time span of EOP prediction: for short-term prediction, the basic data series should be shorter, while for long-term prediction, the series should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop, for the first time, a new method which combines the autoregressive model and a Kalman filter (AR+Kalman) for short-term EOP prediction. The observation and state equations are established using the EOP series and the autoregressive coefficients, respectively, which are used to improve/re-evaluate the AR model. Compared to the single AR model, the AR+Kalman method performs better in the prediction of UT1-UTC and ΔLOD, and the improvement in the prediction of polar motion is significant. (3) Following the successful Earth Orientation Parameter Prediction Comparison Campaign (EOP PCC), the Earth Orientation Parameter Combination of Prediction Pilot Project (EOPC PPP) was sponsored in 2010. As one of the participants from China, we update and submit short- and medium-term (1 to 90 days) EOP predictions every day. According to the current comparative statistics, our prediction accuracy is at a medium international level. We will carry out further research to improve EOP forecast accuracy and enhance our standing in EOP forecasting.
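
    The LS+AR idea can be sketched compactly: fit a least-squares model (trend plus periodic terms) to the EOP series, fit a low-order AR model to the residuals, and extrapolate both parts. The code below is our own minimal illustration on a synthetic series (a single annual harmonic and an AR(1) residual model are simplifying assumptions), not the thesis software.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(1000.0)                                 # days
      series = 0.001 * t + 5.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(scale=0.3, size=t.size)

      def design(times):
          return np.column_stack([np.ones_like(times), times,
                                  np.sin(2 * np.pi * times / 365.25),
                                  np.cos(2 * np.pi * times / 365.25)])

      coef, *_ = np.linalg.lstsq(design(t), series, rcond=None)               # least-squares part
      resid = series - design(t) @ coef
      phi = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])    # AR(1) part

      horizon = 30
      t_future = np.arange(t[-1] + 1, t[-1] + 1 + horizon)
      forecast = design(t_future) @ coef + resid[-1] * phi ** np.arange(1, horizon + 1)
      print(np.round(forecast[:5], 3))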

  20. Cognitive ability in young adulthood predicts risk of early-onset dementia in Finnish men.

    PubMed

    Rantalainen, Ville; Lahti, Jari; Henriksson, Markus; Kajantie, Eero; Eriksson, Johan G; Räikkönen, Katri

    2018-06-06

    To test if the Finnish Defence Forces Basic Intellectual Ability Test scores at 20.1 years predicted risk of organic dementia or Alzheimer disease (AD). Dementia was defined as inpatient or outpatient diagnosis of organic dementia or AD risk derived from Hospital Discharge or Causes of Death Registers in 2,785 men from the Helsinki Birth Cohort Study, divided based on age at first diagnosis into early onset (<65 years) or late onset (≥65 years). The Finnish Defence Forces Basic Intellectual Ability Test comprises verbal, arithmetic, and visuospatial subtests and a total score (scores transformed into a mean of 100 and SD of 15). We used Cox proportional hazard models and adjusted for age at testing, childhood socioeconomic status, mother's age at delivery, parity, participant's birthweight, education, and stroke or coronary heart disease diagnosis. Lower cognitive ability total and verbal ability (hazard ratio [HR] per 1 SD disadvantage >1.69, 95% confidence interval [CI] 1.01-2.63) scores predicted higher early-onset any dementia risk across the statistical models; arithmetic and visuospatial ability scores were similarly associated with early-onset any dementia risk, but these associations weakened after covariate adjustments (HR per 1 SD disadvantage >1.57, 95% CI 0.96-2.57). All associations were rendered nonsignificant when we adjusted for participant's education. Cognitive ability did not predict late-onset dementia risk. These findings reinforce previous suggestions that lower cognitive ability in early life is a risk factor for early-onset dementia. © 2018 American Academy of Neurology.

  1. Prediction/discussion-based learning cycle versus conceptual change text: comparative effects on students' understanding of genetics

    NASA Astrophysics Data System (ADS)

    khawaldeh, Salem A. Al

    2013-07-01

    Background and purpose: The purpose of this study was to investigate the comparative effects of a prediction/discussion-based learning cycle (HPD-LC), conceptual change text (CCT) and traditional instruction on 10th grade students' understanding of genetics concepts. Sample: Participants were 112 10th basic grade male students in three classes of the same school located in an urban area. The three classes taught by the same biology teacher were randomly assigned as a prediction/discussion-based learning cycle class (n = 39), conceptual change text class (n = 37) and traditional class (n = 36). Design and method: A quasi-experimental research design of pre-test-post-test non-equivalent control group was adopted. Participants completed the Genetics Concept Test as pre-test-post-test, to examine the effects of instructional strategies on their genetics understanding. Pre-test scores and Test of Logical Thinking scores were used as covariates. Results: The analysis of covariance showed a statistically significant difference between the experimental and control groups in the favor of experimental groups after treatment. However, no statistically significant difference between the experimental groups (HPD-LC versus CCT instruction) was found. Conclusions: Overall, the findings of this study support the use of the prediction/discussion-based learning cycle and conceptual change text in both research and teaching. The findings may be useful for improving classroom practices in teaching science concepts and for the development of suitable materials promoting students' understanding of science.

  2. Using Data Mining to Teach Applied Statistics and Correlation

    ERIC Educational Resources Information Center

    Hartnett, Jessica L.

    2016-01-01

    This article describes two class activities that introduce the concept of data mining and very basic data mining analyses. Assessment data suggest that students learned some of the conceptual basics of data mining, understood some of the ethical concerns related to the practice, and were able to perform correlations via the Statistical Package for…

  3. Simple Data Sets for Distinct Basic Summary Statistics

    ERIC Educational Resources Information Center

    Lesser, Lawrence M.

    2011-01-01

    It is important to avoid ambiguity with numbers because unfortunate choices of numbers can inadvertently make it possible for students to form misconceptions or make it difficult for teachers to tell if students obtained the right answer for the right reason. Therefore, it is important to make sure when introducing basic summary statistics that…

  4. Checking the predictive accuracy of basic symptoms against ultra high-risk criteria and testing of a multivariable prediction model: Evidence from a prospective three-year observational study of persons at clinical high-risk for psychosis.

    PubMed

    Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A

    2017-09-01

    The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  5. Premixing quality and flame stability: A theoretical and experimental study

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.; Heywood, J. B.; Tabaczynski, R. J.

    1979-01-01

    Models for predicting flame ignition and blowout in a combustor primary zone are presented. A correlation for the blowoff velocity of premixed turbulent flames is developed using the basic quantities of turbulent flow, and the laminar flame speed. A statistical model employing a Monte Carlo calculation procedure is developed to account for nonuniformities in a combustor primary zone. An overall kinetic rate equation is used to describe the fuel oxidation process. The model is used to predict the lean ignition and blow out limits of premixed turbulent flames; the effects of mixture nonuniformity on the lean ignition limit are explored using an assumed distribution of fuel-air ratios. Data on the effects of variations in inlet temperature, reference velocity and mixture uniformity on the lean ignition and blowout limits of gaseous propane-air flames are presented.

  6. A simple approach to polymer mixture miscibility.

    PubMed

    Higgins, Julia S; Lipson, Jane E G; White, Ronald P

    2010-03-13

    Polymeric mixtures are important materials, but the control and understanding of mixing behaviour poses problems. The original Flory-Huggins theoretical approach, using a lattice model to compute the statistical thermodynamics, provides the basic understanding of the thermodynamic processes involved but is deficient in describing most real systems, and has little or no predictive capability. We have developed an approach using a lattice integral equation theory, and in this paper we demonstrate that this not only describes well the literature data on polymer mixtures but allows new insights into the behaviour of polymers and their mixtures. The characteristic parameters obtained by fitting the data have been successfully shown to be transferable from one dataset to another, to be able to correctly predict behaviour outside the experimental range of the original data and to allow meaningful comparisons to be made between different polymer mixtures.

  7. Key statistical and analytical issues for evaluating treatment effects in periodontal research.

    PubMed

    Tu, Yu-Kang; Gilthorpe, Mark S

    2012-06-01

    Statistics is an indispensable tool for evaluating treatment effects in clinical research. Due to the complexities of periodontal disease progression and data collection, statistical analyses for periodontal research have been a great challenge for both clinicians and statisticians. The aim of this article is to provide an overview of several basic, but important, statistical issues related to the evaluation of treatment effects and to clarify some common statistical misconceptions. Some of these issues are general, concerning many disciplines, and some are unique to periodontal research. We first discuss several statistical concepts that have sometimes been overlooked or misunderstood by periodontal researchers. For instance, decisions about whether to use the t-test or analysis of covariance, or whether to use parametric tests such as the t-test or its non-parametric counterpart, the Mann-Whitney U-test, have perplexed many periodontal researchers. We also describe more advanced methodological issues that have sometimes been overlooked by researchers. For instance, the phenomenon of regression to the mean is a fundamental issue to be considered when evaluating treatment effects, and collinearity amongst covariates is a conundrum that must be resolved when explaining and predicting treatment effects. Quick and easy solutions to these methodological and analytical issues are not always available in the literature, and careful statistical thinking is paramount when conducting useful and meaningful research. © 2012 John Wiley & Sons A/S.

  8. A Methodology for Determining Statistical Performance Compliance for Airborne Doppler Radar with Forward-Looking Turbulence Detection Capability

    NASA Technical Reports Server (NTRS)

    Bowles, Roland L.; Buck, Bill K.

    2009-01-01

    The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides for one means, but not the only means, by which an applicant can demonstrate compliance to the FAA directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risks to aircraft, passengers, and crew; and were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this defined methodology for calculating the probability of missed and false hazard indications taking into account the effect of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.

  9. Variations in intensity statistics for representational and abstract art, and for art from the Eastern and Western hemispheres.

    PubMed

    Graham, Daniel J; Field, David J

    2008-01-01

    Two recent studies suggest that natural scenes and paintings show similar statistical properties. But does the content or region of origin of an artwork affect its statistical properties? We addressed this question by having judges place paintings from a large, diverse collection of paintings into one of three subject-matter categories using a forced-choice paradigm. Basic statistics for images whose categorisation was agreed by all judges showed no significant differences between those judged to be 'landscape' and 'portrait/still-life', but these two classes differed from paintings judged to be 'abstract'. All categories showed basic spatial statistical regularities similar to those typical of natural scenes. A test of the full painting collection (140 images) with respect to the works' place of origin (provenance) showed significant differences between Eastern works and Western ones, differences which we find are likely related to the materials and the choice of background color. Although artists deviate slightly from reproducing natural statistics in abstract art (compared to representational art), the great majority of human art likely shares basic statistical limitations. We argue that statistical regularities in art are rooted in the need to make art visible to the eye, not in the inherent aesthetic value of natural-scene statistics, and we suggest that variability in spatial statistics may be generally imposed by manufacture.

  10. Basic statistics (the fundamental concepts).

    PubMed

    Lim, Eric

    2014-12-01

    An appreciation and understanding of statistics is important to all practising clinicians, not simply researchers. This is because mathematics is the fundamental basis on which we base clinical decisions, usually with reference to the benefit in relation to risk. Unless a clinician has a basic understanding of statistics, he or she will never be in a position to question healthcare management decisions that have been handed down from generation to generation, will not be able to conduct research effectively, nor evaluate the validity of published evidence (usually making an assumption that most published work is either all good or all bad). This article provides a brief introduction to basic statistical methods and illustrates their use in common clinical scenarios. In addition, pitfalls of incorrect usage have been highlighted. However, it is not meant to be a substitute for formal training or consultation with a qualified and experienced medical statistician prior to starting any research project.

  11. A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability

    PubMed Central

    Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.

    2012-01-01

    Recent research has seen intraindividual variability (IIV) become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction to several types of positive and negative outcomes (Ram, Rabbit, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
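
    A stripped-down version of such a simulation is sketched below (our own construction, not the authors' code): simulate trial-level data with person-specific means and standard deviations, split each person's trials into two halves, and compare the split-half reliability of the ISD with that of the intraindividual mean. The sample sizes and parameter ranges are arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)
      n_persons, n_trials = 200, 40
      true_mean = rng.normal(50, 10, size=n_persons)
      true_sd = rng.uniform(2, 8, size=n_persons)          # person-specific variability

      data = true_mean[:, None] + true_sd[:, None] * rng.normal(size=(n_persons, n_trials))
      half1, half2 = data[:, ::2], data[:, 1::2]           # odd vs even trials

      isd_rel = np.corrcoef(half1.std(axis=1, ddof=1), half2.std(axis=1, ddof=1))[0, 1]
      mean_rel = np.corrcoef(half1.mean(axis=1), half2.mean(axis=1))[0, 1]
      print(f"split-half reliability  ISD: {isd_rel:.2f}   mean: {mean_rel:.2f}")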

  12. [Projection of prisoner numbers].

    PubMed

    Metz, Rainer; Sohn, Werner

    2015-01-01

    The past and future development of occupancy rates in prisons is of crucial importance for the judicial administration of every country. Basic factors for planning the required penal facilities are seasonal fluctuations, minimum, maximum and average occupancy as well as the present situation and potential development of certain imprisonment categories. As the prisoner number of a country is determined by a complex set of interdependent conditions, it has turned out to be difficult to provide any theoretical explanations. The idea accepted in criminology for a long time that prisoner numbers are interdependent with criminal policy must be regarded as having failed. Statistical and time series analyses may help, however, to identify the factors having influenced the development of prisoner numbers in the past. The analyses presented here, first describe such influencing factors from a criminological perspective and then deal with their statistical identification and modelling. Using the development of prisoner numbers in Hesse as an example, it has been found that modelling methods in which the independent variables predict the dependent variable with a time lag are particularly helpful. A potential complication is, however, that for predicting the number of prisoners the different dynamics in German and foreign prisoners require the development of further models.

  13. Numerical investigation of kinetic turbulence in relativistic pair plasmas - I. Turbulence statistics

    NASA Astrophysics Data System (ADS)

    Zhdankin, Vladimir; Uzdensky, Dmitri A.; Werner, Gregory R.; Begelman, Mitchell C.

    2018-02-01

    We describe results from particle-in-cell simulations of driven turbulence in collisionless, magnetized, relativistic pair plasma. This physical regime provides a simple setting for investigating the basic properties of kinetic turbulence and is relevant for high-energy astrophysical systems such as pulsar wind nebulae and astrophysical jets. In this paper, we investigate the statistics of turbulent fluctuations in simulations on lattices of up to 1024^3 cells and containing up to 2 × 10^11 particles. Due to the absence of a cooling mechanism in our simulations, turbulent energy dissipation reduces the magnetization parameter to order unity within a few dynamical times, causing turbulent motions to become sub-relativistic. In the developed stage, our results agree with predictions from magnetohydrodynamic turbulence phenomenology at inertial-range scales, including a power-law magnetic energy spectrum with index near -5/3, scale-dependent anisotropy of fluctuations described by critical balance, lognormal distributions for particle density and internal energy density (related by a 4/3 adiabatic index, as predicted for an ultra-relativistic ideal gas), and the presence of intermittency. We also present possible signatures of a kinetic cascade by measuring power-law spectra for the magnetic, electric and density fluctuations at sub-Larmor scales.

  14. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert based forecasting metric similar to those presented in Keilis-Borok (2002); Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long standing question of activation vs quiescent type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California USA and northern Baja California Norte, Mexico.

  15. Fieldcrest Cannon, Inc. Advanced Technical Preparation. Statistical Process Control (SPC). PRE-SPC I. Instructor Book.

    ERIC Educational Resources Information Center

    Averitt, Sallie D.

    This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills and instructing them in basic calculator…

  16. Social Physique Anxiety and Intention to Be Physically Active: A Self-Determination Theory Approach.

    PubMed

    Sicilia, Álvaro; Sáenz-Alvarez, Piedad; González-Cutre, David; Ferriz, Roberto

    2016-12-01

    Based on self-determination theory, the purpose of this study was to analyze the relationship between social physique anxiety and intention to be physically active, while taking into account the mediating effects of the basic psychological needs and behavioral regulations in exercise. Having obtained parents' prior consent, 390 students in secondary school (218 boys, 172 girls; M age  = 15.10 years, SD = 1.94 years) completed a self-administered questionnaire during physical education class that assessed the target variables. Preliminary analyses included means, standard deviations, and bivariate correlations among the target variables. Next, a path analysis was performed using the maximum likelihood estimation method with the bootstrapping procedure in the statistical package AMOS 19. Analysis revealed that social physique anxiety negatively predicted intention to be physically active through mediation of the basic psychological needs and the 3 autonomous forms of motivation (i.e., intrinsic motivation, integrated regulation, and identified regulation). The results suggest that social physique anxiety is an internal source of controlling influence that hinders basic psychological need satisfaction and autonomous motivation in exercise, and interventions aimed at reducing social physique anxiety could promote future exercise.

  17. Development and validation of a machine learning algorithm and hybrid system to predict the need for life-saving interventions in trauma patients.

    PubMed

    Liu, Nehemiah T; Holcomb, John B; Wade, Charles E; Batchinsky, Andriy I; Cancio, Leopoldo C; Darrah, Mark I; Salinas, José

    2014-02-01

    Accurate and effective diagnosis of actual injury severity can be problematic in trauma patients. Inherent physiologic compensatory mechanisms may prevent accurate diagnosis and mask true severity in many circumstances. The objective of this project was the development and validation of a multiparameter machine learning algorithm and system capable of predicting the need for life-saving interventions (LSIs) in trauma patients. Statistics based on means, slopes, and maxima of various vital sign measurements corresponding to 79 trauma patient records generated over 110,000 feature sets, which were used to develop, train, and implement the system. Comparisons among several machine learning models proved that a multilayer perceptron would best implement the algorithm in a hybrid system consisting of a machine learning component and basic detection rules. Additionally, 295,994 feature sets from 82 h of trauma patient data showed that the system can obtain 89.8 % accuracy within 5 min of recorded LSIs. Use of machine learning technologies combined with basic detection rules provides a potential approach for accurately assessing the need for LSIs in trauma patients. The performance of this system demonstrates that machine learning technology can be implemented in a real-time fashion and potentially used in a critical care environment.
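
    The feature sets described (means, slopes, and maxima of vital-sign windows) and the multilayer-perceptron component can be sketched as follows. This is a rough illustration on synthetic data with arbitrary window lengths and labels, not the published system, and it omits the basic detection rules of the hybrid design.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(5)
      n_windows, window_len = 2000, 60                     # e.g. 60 samples of heart rate

      labels = rng.integers(0, 2, size=n_windows)
      base = np.where(labels == 1, 120.0, 80.0)            # "needs intervention" windows run higher
      signals = base[:, None] + np.cumsum(rng.normal(scale=1.0, size=(n_windows, window_len)), axis=1)

      def features(window):
          t = np.arange(window.size)
          slope = np.polyfit(t, window, 1)[0]
          return [window.mean(), slope, window.max()]      # mean, slope, maximum

      X = np.array([features(w) for w in signals])
      X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
      print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))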

  18. Mass Uncertainty and Application For Space Systems

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey

    2013-01-01

    Expected development maturity under contract (spec) should correlate with Project/Program Approved MGA Depletion Schedule in Mass Properties Control Plan. If specification NTE, MGA is inclusive of Actual MGA (A5 & A6). If specification is not an NTE Actual MGA (e.g. nominal), then MGA values are reduced by A5 values and A5 is representative of remaining uncertainty. Basic Mass = Engineering Estimate based on design and construction principles with NO embedded margin. MGA Mass = Basic Mass * assessed % from approved MGA schedule. Predicted Mass = Basic + MGA. Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic.
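
    The mass relationships stated above can be applied numerically as follows; the subsystem names, basic masses, and MGA percentages are made up for illustration.

      subsystems = {                   # basic mass (kg), assessed MGA fraction from the schedule
          "structure":  (150.0, 0.12),
          "avionics":   ( 40.0, 0.20),
          "propulsion": ( 90.0, 0.08),
      }

      basic_total = sum(basic for basic, _ in subsystems.values())
      predicted_total = sum(basic * (1.0 + mga) for basic, mga in subsystems.values())
      aggregate_mga_pct = (predicted_total - basic_total) / basic_total

      print(f"aggregate basic = {basic_total:.1f} kg, predicted = {predicted_total:.1f} kg, "
            f"aggregate MGA = {aggregate_mga_pct:.1%}")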

  19. Contrasting effects of feature-based statistics on the categorisation and identification of visual objects

    PubMed Central

    Taylor, Kirsten I.; Devereux, Barry J.; Acres, Kadia; Randall, Billi; Tyler, Lorraine K.

    2013-01-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. PMID:22137770

  20. Statistical methods of estimating mining costs

    USGS Publications Warehouse

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
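    Relationships of this kind (for example, a Taylor's-Rule-style power law between ore tonnage and operating rate, or capital cost versus mill throughput and strip ratio) are typically estimated as log-linear regressions. The sketch below fits such a power law to synthetic data purely to show the form of the fit; the coefficients carry no economic meaning.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 27
    tonnage = 10 ** rng.uniform(6, 9, n)              # ore tonnage (t), synthetic
    # Synthetic Taylor's-Rule-style data: operating rate ~ a * tonnage^b
    rate = 0.01 * tonnage ** 0.75 * rng.lognormal(sigma=0.2, size=n)

    # Fit log(rate) = log(a) + b * log(tonnage) by ordinary least squares.
    X = np.column_stack([np.ones(n), np.log(tonnage)])
    coef, *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
    log_a, b = coef
    print(f"fitted exponent b = {b:.2f}, scale a = {np.exp(log_a):.3g}")

    # Predict the operating rate for a hypothetical 50 Mt deposit.
    print("predicted rate:", np.exp(log_a) * (50e6) ** b)
    ```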

  1. Descriptive data analysis.

    PubMed

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is the first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.

  2. Regression: The Apple Does Not Fall Far From the Tree.

    PubMed

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.
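    As a concrete illustration of the simplest case discussed in this tutorial, the sketch below fits a simple linear regression and reads off the slope, intercept, and R-squared; the data are synthetic and the variable roles are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(50, 10, 200)                  # predictor (e.g., a baseline measurement)
    y = 0.8 * x + 5 + rng.normal(0, 8, 200)      # outcome with noise

    res = stats.linregress(x, y)                 # simple linear regression
    print(f"slope = {res.slope:.2f}, intercept = {res.intercept:.2f}, "
          f"R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3g}")
    ```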

  3. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
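    The two calibration strategies contrasted above can be compared numerically: fit the readings on the standards and invert, versus regress the standards on the readings directly. A minimal sketch with simulated standards follows; the instrument model is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    standards = np.linspace(0, 100, 21)                # known reference values (x)
    readings = 2.0 * standards + 5 + rng.normal(0, 3, standards.size)  # response (y)

    # Classical calibration: regress y on x, then invert the fitted line.
    b1, b0 = np.polyfit(standards, readings, 1)
    def classical(y_new):
        return (y_new - b0) / b1

    # Reverse calibration: regress x on y and use the fit directly.
    c1, c0 = np.polyfit(readings, standards, 1)
    def reverse(y_new):
        return c1 * y_new + c0

    y_new = 120.0                                      # a new instrument reading
    print("classical estimate:", classical(y_new))
    print("reverse estimate:  ", reverse(y_new))
    ```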

  4. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
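    Probability matching versus maximizing, discussed in this record (and its PMC duplicate below), can be illustrated with a short simulation: a matcher chooses each option in proportion to its estimated reward probability, while a maximizer always picks the currently best option. This sketch is illustrative only and is not the authors' Bayesian world-model learner.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    p_true = np.array([0.7, 0.3])          # true reward probabilities of two options
    trials = 10_000

    def run(policy):
        wins = np.zeros(2)
        pulls = np.zeros(2)
        total = 0.0
        for _ in range(trials):
            est = (wins + 1) / (pulls + 2)        # Beta(1,1) posterior mean per option
            if policy == "maximize":
                choice = int(np.argmax(est))      # max decision rule
            else:                                 # probability matching
                choice = rng.choice(2, p=est / est.sum())
            reward = float(rng.random() < p_true[choice])
            wins[choice] += reward
            pulls[choice] += 1
            total += reward
        return total / trials

    print("maximizing reward rate:", run("maximize"))
    print("matching reward rate:  ", run("match"))
    ```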

  5. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  6. Cancer Pharmacogenomics: Integrating Discoveries in Basic, Clinical and Population Sciences to Advance Predictive Cancer Care

    Cancer.gov

    Cancer Pharmacogenomics: Integrating Discoveries in Basic, Clinical and Population Sciences to Advance Predictive Cancer Care, a 2010 workshop sponsored by the Epidemiology and Genomics Research Program.

  7. Development of a funding, cost, and spending model for satellite projects

    NASA Technical Reports Server (NTRS)

    Johnson, Jesse P.

    1989-01-01

    The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) are used to predict the total costs of satellite projects. An effort was conducted to extend the modeling capabilities from total-budget analysis to analysis of total budget and budget outlays over time. A statistically based, data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model describing the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989-dollar equivalent. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach used is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by Life Cycle Cost Models (LCCM). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model it is necessary to transform the problem from a dollar/time space to a percentage-of-total-budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule which relates the amount of money spent to the percentage of project completion.
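    A spending profile of the kind described, expressed as fraction of total budget versus fraction of project duration, can be fitted with a Weibull cumulative distribution function. The sketch below uses scipy's curve_fit on made-up outlay data; it shows only the functional form, not the RAO model itself.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical cumulative outlays as fractions of total budget vs. project time.
    t = np.linspace(0.1, 1.0, 10)                    # fraction of project duration
    spent = np.array([0.02, 0.07, 0.16, 0.28, 0.43, 0.58, 0.72, 0.84, 0.93, 1.00])

    def weibull_cdf(t, k, lam):
        """Weibull CDF used as a cumulative spending profile."""
        return 1.0 - np.exp(-(t / lam) ** k)

    (k, lam), _ = curve_fit(weibull_cdf, t, spent, p0=[2.0, 0.5])
    print(f"shape k = {k:.2f}, scale lambda = {lam:.2f}")

    # Invert the fitted curve: at what time fraction is 50% of the budget spent?
    t_half = lam * (-np.log(1 - 0.5)) ** (1 / k)
    print(f"50% of budget spent at t = {t_half:.2f} of project duration")
    ```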

  8. A Multidisciplinary Approach for Teaching Statistics and Probability

    ERIC Educational Resources Information Center

    Rao, C. Radhakrishna

    1971-01-01

    The author presents a syllabus for an introductory (first year after high school) course in statistics and probability and some methods of teaching statistical techniques. The description comes basically from the procedures used at the Indian Statistical Institute, Calcutta. (JG)

  9. Applications of statistics to medical science (1) Fundamental concepts.

    PubMed

    Watanabe, Hiroshi

    2011-01-01

    The conceptual framework of statistical tests and statistical inferences is discussed, and the epidemiological background of statistics is briefly reviewed. This study is one of a series in which we survey the basics of statistics and practical methods used in medical statistics. Arguments related to actual statistical analysis procedures will be made in subsequent papers.

  10. The Predictive Power of Electronic Polarizability for Tailoring the Refractivity of High Index Glasses Optical Basicity Versus the Single Oscillator Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCloy, John S.; Riley, Brian J.; Johnson, Bradley R.

    Four compositions of high density (~8 g/cm3) heavy metal oxide glasses composed of PbO, Bi2O3, and Ga2O3 were produced and refractivity parameters (refractive index and density) were computed and measured. Optical basicity was computed using three different models – average electronegativity, ionic-covalent parameter, and energy gap – and the basicity results were used to compute oxygen polarizability and subsequently refractive index. Refractive indices were measured in the visible and infrared at 0.633 μm, 1.55 μm, 3.39 μm, 5.35 μm, 9.29 μm, and 10.59 μm using a unique prism coupler setup, and data were fitted to the Sellmeier expression to obtain an equation of the dispersion of refractive index with wavelength. Using this dispersion relation, single oscillator energy, dispersion energy, and lattice energy were determined. Oscillator parameters were also calculated for the various glasses from their oxide values as an additional means of predicting index. Calculated dispersion parameters from oxides underestimate the index by 3 to 4%. Predicted glass index from optical basicity, based on component oxide energy gaps, underpredicts the index at 0.633 μm by only 2%, while other basicity scales are less accurate. The predicted energy gap of the glasses based on this optical basicity overpredicts the Tauc optical gap as determined by transmission measurements by 6 to 10%. These results show that for this system, density, refractive index in the visible, and energy gap can be reasonably predicted using only composition, optical basicity values for the constituent oxides, and partial molar volume coefficients. Calculations such as these are useful for a priori prediction of optical properties of glasses.
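    Fitting measured indices to a Sellmeier dispersion expression, as done here, can be sketched as follows. The index values below are invented placeholders rather than the paper's data, and a single-term Sellmeier form is assumed for simplicity.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Wavelengths (micrometers) used in the study; index values are placeholders.
    wl = np.array([0.633, 1.55, 3.39, 5.35, 9.29, 10.59])
    n_meas = np.array([2.45, 2.38, 2.35, 2.34, 2.30, 2.28])   # hypothetical indices

    def sellmeier_n2(lam, a, b, c):
        """One-term Sellmeier form: n^2 = a + b*lam^2 / (lam^2 - c)."""
        return a + b * lam**2 / (lam**2 - c)

    # Fit n^2 to avoid square-root issues during optimization.
    (a, b, c), _ = curve_fit(sellmeier_n2, wl, n_meas**2, p0=[1.0, 4.5, 0.04])
    print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.4f} um^2")
    print("predicted n at 2.0 um:", np.sqrt(sellmeier_n2(2.0, a, b, c)))
    ```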

  11. A Comparison of Computer-Assisted Instruction and the Traditional Method of Teaching Basic Statistics

    ERIC Educational Resources Information Center

    Ragasa, Carmelita Y.

    2008-01-01

    The objective of the study is to determine if there is a significant difference in the effects of the treatment and control groups on achievement as well as on attitude as measured by the posttest. A class of 38 sophomore college students in the basic statistics taught with the use of computer-assisted instruction and another class of 15 students…

  12. Back to basics: an introduction to statistics.

    PubMed

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  13. Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models.

    PubMed

    Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Prediction is a fundamental part of the prevention of cardiovascular diseases (CVD). The development of prediction algorithms based on multivariate regression models emerged several decades ago. In parallel with the development of predictive models, biomarker research emerged on an impressively large scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers or, more basically, how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for comparing the predictive performance of models with and without novel biomarkers. A lack of user-friendly statistical software has restricted implementation of these novel model assessment methods when examining novel biomarkers. We therefore intended to develop user-friendly software that could be used by researchers with few programming skills. We have written a Stata command intended to help researchers obtain the cut point-free and cut point-based net reclassification improvement (NRI) indices and the relative and absolute integrated discrimination improvement (IDI) indices for logistic regression analyses. We applied the command to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on a family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. The command is addpred for logistic regression models. The Stata package provided herein can encourage the use of novel methods in examining the predictive capacity of the ever-emerging plethora of novel biomarkers.
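    For readers outside Stata, the cut point-free NRI and the absolute IDI reduce to simple comparisons of predicted probabilities from the two nested models. A minimal sketch with simulated risks and the standard formulas (not the addpred command itself) follows.

    ```python
    import numpy as np

    def continuous_nri_idi(p_old, p_new, y):
        """Cut point-free NRI and absolute IDI for two sets of predicted risks."""
        y = np.asarray(y, bool)
        up = p_new > p_old                          # risk moved up with the new model
        # NRI: net correct upward moves in events plus net correct downward
        # moves in non-events.
        nri_events = up[y].mean() - (~up[y]).mean()
        nri_nonevents = (~up[~y]).mean() - up[~y].mean()
        nri = nri_events + nri_nonevents
        # IDI: improvement in discrimination slope (mean risk in events minus
        # mean risk in non-events) from the old to the new model.
        idi = (p_new[y].mean() - p_new[~y].mean()) - (p_old[y].mean() - p_old[~y].mean())
        return nri, idi

    rng = np.random.default_rng(0)
    n = 1000
    y = rng.random(n) < 0.2                         # simulated event indicator
    yf = y.astype(float)
    p_old = np.clip(0.2 + 0.1 * yf + rng.normal(0, 0.1, n), 0.01, 0.99)
    p_new = np.clip(p_old + 0.1 * (yf - 0.5) + rng.normal(0, 0.02, n), 0.01, 0.99)
    nri, idi = continuous_nri_idi(p_old, p_new, y)
    print(f"NRI = {nri:.3f}, IDI = {idi:.3f}")
    ```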

  14. The contribution of collective attack tactics in differentiating handball score efficiency.

    PubMed

    Rogulj, Nenad; Srhoj, Vatromir; Srhoj, Ljerka

    2004-12-01

    The prevalence of 19 elements of collective tactics in score-efficient and score-inefficient teams was analyzed in 90 First Croatian Handball League--Men games during the 1998-1999 season. Prediction variables were used to describe the duration, continuity, system, organization, and spatial direction of attacks. Analysis of the basic descriptive and distributional statistical parameters revealed normal distribution of all variables and the possibility of using multivariate methods. Canonical discriminant analysis and analysis of variance showed that the use of collective tactics elements in attacks differed statistically significantly between the winning and losing teams. Counter-attacks and uninterrupted attacks predominate in winning teams. Other types of attacks, such as long position attack, multiply interrupted attack, attack with one circle runner (pivot player), attack based on basic principles, attack based on group cooperation, attack based on independent action, attack based on group maneuvering, rightward directed attack, and leftward directed attack, predominate in losing teams. Winning teams were found to be clearly characterized by quick attacks against unorganized defense, whereas prolonged, interrupted position attacks against organized defense along with frequent and diverse tactical actions were characteristic of losing teams. The choice and frequency of using a particular tactical activity in position attack do not warrant score efficiency but are usually a consequence of the limited anthropologic potential and low level of individual technical-tactical skills of the players in low-quality teams.

  15. The Energetic Cost of Walking: A Comparison of Predictive Methods

    PubMed Central

    Kramer, Patricia Ann; Sylvester, Adam D.

    2011-01-01

    Background: The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is “best”, but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. Methodology/Principal Findings: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Conclusion: Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species. PMID:21731693

  16. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Auto-assembly of nanometer thick, water soluble layers of plasmid DNA complexed with diamines and basic amino acids on graphite: Greatest DNA protection is obtained with arginine.

    PubMed

    Khalil, T T; Boulanouar, O; Heintz, O; Fromm, M

    2017-02-01

    We have investigated the ability of diamines as well as basic amino acids to condense DNA onto highly ordered pyrolytic graphite with minimum damage after re-dissolution in water. Based on a bibliographic survey we briefly summarize DNA binding properties with diamines as compared to basic amino acids. Thus, solutions of DNA complexed with these linkers were drop-cast in order to deposit ultra-thin layers on the surface of HOPG in the absence or presence of Tris buffer. Atomic Force Microscopy analyses showed that, at a fixed ligand-DNA mixing ratio of 16, the mean thickness of the layers can be statistically predicted to lie in the range 0-50 nm with a maximum standard deviation ±6 nm, using a simple linear law depending on the DNA concentration. The morphology of the layers appears to be ligand-dependent. While the layers containing diamines present holes, those formed in the presence of basic amino acids, except for lysine, are much more compact and dense. X-ray Photoelectron Spectroscopy measurements provide compositional information indicating that, compared to the maximum number of DNA sites to which the ligands may bind, the basic amino acids Arg and His are present in large excess. Conservation of the supercoiled topology of the DNA plasmids was studied after recovery of the complex layers in water. Remarkably, arginine has the best protection capabilities whether Tris was present or not in the initial solution. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    PubMed

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  19. Do Basic Skills Predict Youth Unemployment (16- to 24-Year-Olds) Also when Controlled for Accomplished Upper-Secondary School? A Cross-Country Comparison

    ERIC Educational Resources Information Center

    Lundetrae, Kjersti; Gabrielsen, Egil; Mykletun, Reidar

    2010-01-01

    Basic skills and educational level are closely related, and both might affect employment. Data from the Adult Literacy and Life Skills Survey were used to examine whether basic skills in terms of literacy and numeracy predicted youth unemployment (16-24 years) while controlling for educational level. Stepwise logistic regression showed that in…

  20. Is infant-directed speech interesting because it is surprising? - Linking properties of IDS to statistical learning and attention at the prosodic level.

    PubMed

    Räsänen, Okko; Kakouros, Sofoklis; Soderstrom, Melanie

    2018-06-06

    The exaggerated intonation and special rhythmic properties of infant-directed speech (IDS) have been hypothesized to attract infants' attention to the speech stream. However, there has been little work actually connecting the properties of IDS to models of attentional processing or perceptual learning. A number of such attention models suggest that surprising or novel perceptual inputs attract attention, where novelty can be operationalized as the statistical (un)predictability of the stimulus in the given context. Since prosodic patterns such as F0 contours are accessible to young infants who are also known to be adept statistical learners, the present paper investigates a hypothesis that F0 contours in IDS are less predictable than those in adult-directed speech (ADS), given previous exposure to both speaking styles, thereby potentially tapping into basic attentional mechanisms of the listeners in a similar manner that relative probabilities of other linguistic patterns are known to modulate attentional processing in infants and adults. Computational modeling analyses with naturalistic IDS and ADS speech from matched speakers and contexts show that IDS intonation has lower overall temporal predictability even when the F0 contours of both speaking styles are normalized to have equal means and variances. A closer analysis reveals that there is a tendency of IDS intonation to be less predictable at the end of short utterances, whereas ADS exhibits more stable average predictability patterns across the full extent of the utterances. The difference between IDS and ADS persists even when the proportion of IDS and ADS exposure is varied substantially, simulating different relative amounts of IDS heard in different family and cultural environments. Exposure to IDS is also found to be more efficient for predicting ADS intonation contours in new utterances than exposure to the equal amount of ADS speech. This indicates that the more variable prosodic contours of IDS also generalize to ADS, and may therefore enhance prosodic learning in infancy. Overall, the study suggests that one reason behind infant preference for IDS could be its higher information value at the prosodic level, as measured by the amount of surprisal in the F0 contours. This provides the first formal link between the properties of IDS and the models of attentional processing and statistical learning in the brain. However, this finding does not rule out the possibility that other differences between the IDS and ADS also play a role. Copyright © 2018 Elsevier B.V. All rights reserved.
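    Predictability of an F0 contour can be operationalized, in a far simpler way than the paper's models, as the average surprisal of quantized F0 values under a bigram (first-order Markov) model trained on prior exposure. The sketch below is purely illustrative; the contour generator and bin counts are invented.

    ```python
    import numpy as np

    def train_bigram(sequences, n_bins):
        """Add-one-smoothed bigram transition probabilities over F0 bins."""
        counts = np.ones((n_bins, n_bins))
        for seq in sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def mean_surprisal(seq, trans):
        """Average -log2 P(next bin | current bin) over a contour."""
        return float(np.mean([-np.log2(trans[a, b]) for a, b in zip(seq[:-1], seq[1:])]))

    rng = np.random.default_rng(2)
    n_bins = 8
    def contour(var):
        # Synthetic F0 contour: a random walk, quantized into n_bins bins.
        f0 = np.cumsum(rng.normal(scale=var, size=50))
        return np.digitize(f0, np.linspace(-4, 4, n_bins - 1))

    # "Exposure" mixing smoother ADS-like and more variable IDS-like contours.
    exposure = [contour(0.3) for _ in range(200)] + [contour(0.8) for _ in range(200)]
    trans = train_bigram(exposure, n_bins)
    print("ADS-like contour surprisal:", mean_surprisal(contour(0.3), trans))
    print("IDS-like contour surprisal:", mean_surprisal(contour(0.8), trans))
    ```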

  1. Bayesian models: A statistical primer for ecologists

    USGS Publications Warehouse

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  2. Understanding Statistical Concepts and Terms in Context: The GovStat Ontology and the Statistical Interactive Glossary.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.; Pattuelli, Maria Cristina; Brown, Ron T.

    2003-01-01

    Describes the Statistical Interactive Glossary (SIG), an enhanced glossary of statistical terms supported by the GovStat ontology of statistical concepts. Presents a conceptual framework whose components articulate different aspects of a term's basic explanation that can be manipulated to produce a variety of presentations. The overarching…

  3. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
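    Regional regression equations of this type usually relate the log of a flow statistic to the logs of basin characteristics such as drainage area and mean annual precipitation. The sketch below fits such an equation to synthetic basins; the characteristics and coefficients have no hydrologic meaning and are not the report's equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 17                                             # continuous-record stations
    area = 10 ** rng.uniform(0.5, 2.5, n)              # drainage area, synthetic
    precip = rng.uniform(35, 55, n)                    # mean annual precipitation, synthetic
    q50 = 0.8 * area ** 0.95 * (precip / 40) ** 1.5 * rng.lognormal(sigma=0.1, size=n)

    # Fit log10(Q50) = b0 + b1*log10(area) + b2*log10(precip) by least squares.
    X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip)])
    b, *_ = np.linalg.lstsq(X, np.log10(q50), rcond=None)
    print("coefficients (b0, b1, b2):", np.round(b, 2))

    # Predict the statistic at an ungaged site with assumed basin characteristics.
    x_new = np.array([1.0, np.log10(25.0), np.log10(45.0)])
    print("predicted Q50:", 10 ** (x_new @ b))
    ```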

  4. Probability, statistics, and computational science.

    PubMed

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.

  5. Earthquake Hazard Assessment: an Independent Review

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2016-04-01

    Seismic hazard assessment (SHA), from term-less (probabilistic PSHA or deterministic DSHA) to time-dependent (t-DASH) including short-term earthquake forecast/prediction (StEF), is not an easy task; it implies a delicate application of statistics to data of limited size and different accuracy. Regretfully, in many cases of SHA, t-DASH, and StEF, the claims of a high potential and efficiency of the methodology are based on a flawed application of statistics and hardly suitable for communication to decision makers. The necessity and possibility of applying the modified tools of Earthquake Prediction Strategies, in particular, the Error Diagram, introduced by G.M. Molchan in the early 1990s for evaluation of SHA, and the Seismic Roulette null-hypothesis as a measure of the alerted space, is evident, and such testing must be done in advance of claiming hazardous areas and/or times. The set of errors, i.e. the rates of failure and of the alerted space-time volume, compared to those obtained in the same number of random guess trials, permits evaluating the SHA method effectiveness and determining the optimal choice of the parameters in regard to specified cost-benefit functions. This and other information obtained in such testing may supply us with a realistic estimate of confidence in SHA results and related recommendations on the level of risks for decision making in regard to engineering design, insurance, and emergency management. These basics of SHA evaluation are exemplified with a few cases of misleading "seismic hazard maps", "precursors", and "forecast/prediction methods".

  6. Neural activity during natural viewing of Sesame Street statistically predicts test scores in early childhood.

    PubMed

    Cantlon, Jessica F; Li, Rosa

    2013-01-01

    It is not currently possible to measure the real-world thought process that a child has while observing an actual school lesson. However, if it could be done, children's neural processes would presumably be predictive of what they know. Such neural measures would shed new light on children's real-world thought. Toward that goal, this study examines neural processes that are evoked naturalistically, during educational television viewing. Children and adults all watched the same Sesame Street video during functional magnetic resonance imaging (fMRI). Whole-brain intersubject correlations between the neural timeseries from each child and a group of adults were used to derive maps of "neural maturity" for children. Neural maturity in the intraparietal sulcus (IPS), a region with a known role in basic numerical cognition, predicted children's formal mathematics abilities. In contrast, neural maturity in Broca's area correlated with children's verbal abilities, consistent with prior language research. Our data show that children's neural responses while watching complex real-world stimuli predict their cognitive abilities in a content-specific manner. This more ecologically natural paradigm, combined with the novel measure of "neural maturity," provides a new method for studying real-world mathematics development in the brain.
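    The "neural maturity" measure described, an intersubject correlation between a child's voxel timeseries and the adult group, can be sketched as a Pearson correlation against the adult mean timecourse. Synthetic timeseries stand in for fMRI data; this is not the study's analysis pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    timepoints = 200
    shared = rng.normal(size=timepoints)                  # stimulus-driven signal

    # Synthetic data: adults share the signal strongly, one child shares it
    # only partially (i.e., its "neural maturity" is lower).
    adults = shared + rng.normal(scale=0.5, size=(20, timepoints))
    child = 0.6 * shared + rng.normal(scale=1.0, size=timepoints)

    adult_mean = adults.mean(axis=0)                       # group-average timecourse
    neural_maturity = np.corrcoef(child, adult_mean)[0, 1] # child-to-adult correlation
    print(f"neural maturity (intersubject correlation) = {neural_maturity:.2f}")
    ```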

  7. DroSpeGe: rapid access database for new Drosophila species genomes.

    PubMed

    Gilbert, Donald G

    2007-01-01

    The Drosophila species comparative genome database DroSpeGe (http://insects.eugenes.org/DroSpeGe/) provides genome researchers with rapid, usable access to 12 new and old Drosophila genomes, since its inception in 2004. Scientists can use, with minimal computing expertise, the wealth of new genome information for developing new insights into insect evolution. New genome assemblies provided by several sequencing centers have been annotated with known model organism gene homologies and gene predictions to provide basic comparative data. TeraGrid supplies the shared cyberinfrastructure for the primary computations. This genome database includes homologies to Drosophila melanogaster and eight other eukaryote model genomes, and gene predictions from several groups. BLAST searches of the newest assemblies are integrated with genome maps. GBrowse maps provide detailed views of cross-species aligned genomes. BioMart provides for data mining of annotations and sequences. Common chromosome maps identify major synteny among species. Potential gain and loss of genes is suggested by Gene Ontology groupings for genes of the new species. Summaries of essential genome statistics include sizes, genes found and predicted, homology among genomes, phylogenetic trees of species and comparisons of several gene predictions for sensitivity and specificity in finding new and known genes.

  8. Comments of statistical issue in numerical modeling for underground nuclear test monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, W.L.; Anderson, K.K.

    1993-03-01

    The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks. Statistical ideas may be helpful in resolving some numerical modeling issues. Specifically, we comment first on the role of statistical design/analysis in the quantification process to answer the question "what do we know about the numerical modeling of underground nuclear tests?" and second on the peculiar nature of uncertainty analysis for situations involving numerical modeling. The simulations described in the workshop, though associated with topic areas, were basically sets of examples. Each simulation was tuned towards agreeing with either empirical evidence or an expert's opinion of what empirical evidence would be. While the discussions were reasonable, whether the embellishments were correct or a forced fitting of reality is unclear and illustrates that "simulation is easy." We also suggest that these examples of simulation are typical and the questions concerning the legitimacy and the role of knowing the reality are fair, in general, with respect to simulation. The answers will help us understand why "prediction is difficult."

  9. Granularity refined by knowledge: contingency tables and rough sets as tools of discovery

    NASA Astrophysics Data System (ADS)

    Zytkow, Jan M.

    2000-04-01

    Contingency tables represent data in a granular way and are a well-established tool for inductive generalization of knowledge from data. We show that the basic concepts of rough sets, such as concept approximation, indiscernibility, and reduct can be expressed in the language of contingency tables. We further demonstrate the relevance to rough sets theory of additional probabilistic information available in contingency tables, in particular of statistical tests of significance and predictive strength applied to contingency tables. Tests of both types can help the evaluation mechanisms used in inductive generalization based on rough sets. Granularity of attributes can be improved in feedback with knowledge discovered in data. We demonstrate how 49er's facilities for (1) contingency table refinement, (2) column and row grouping based on correspondence analysis, and (3) the search for equivalence relations between attributes improve both the granularization of attributes and the quality of knowledge. Finally we demonstrate the limitations of knowledge viewed as concept approximation, which is the focus of rough sets. Transcending that focus and reorienting towards predictive knowledge and towards the related distinction between possible and impossible (or statistically improbable) situations will be very useful in expanding the rough sets approach to more expressive forms of knowledge.
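    Significance and predictive strength of a contingency table, as mentioned above, are commonly assessed with a chi-square test and an effect-size measure such as Cramér's V. A small sketch using scipy follows; the table values are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x3 contingency table: attribute value (rows) vs. class (columns).
    table = np.array([[30, 10, 5],
                      [8, 25, 22]])

    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))   # predictive strength
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}, Cramér's V = {cramers_v:.2f}")
    ```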

  10. Fish: A New Computer Program for Friendly Introductory Statistics Help

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Raffle, Holly

    2005-01-01

    All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…

  11. Assessment and prediction of inter-joint upper limb movement correlations based on kinematic analysis and statistical regression

    NASA Astrophysics Data System (ADS)

    Toth-Tascau, Mirela; Balanean, Flavia; Krepelka, Mircea

    2013-10-01

    Musculoskeletal impairment of the upper limb can cause difficulties in performing basic daily activities. Three-dimensional motion analysis can provide valuable data for precisely determining arm movement and inter-joint coordination. The purpose of this study was to develop a method to evaluate the degree of impairment based on the influence of shoulder movements on the amplitude of elbow flexion and extension, under the assumption that a lack of motion of the elbow joint will be compensated by increased shoulder activity. In order to develop and validate a statistical model, one healthy young volunteer was involved in the study. The activity of choice simulated blowing the nose, starting from a slight flexion of the elbow, raising the hand until the middle finger touches the tip of the nose, and returning to the start position. Inter-joint coordination between the elbow and shoulder movements showed significant correlation. Statistical regression was used to fit an equation model describing the influence of shoulder movements on elbow mobility. The study provides a brief description of the kinematic analysis protocol and statistical models that may be useful in describing the relation between inter-joint movements of daily activities.

  12. Coevolution at protein complex interfaces can be detected by the complementarity trace with important impact for predictive docking

    PubMed Central

    Madaoui, Hocine; Guerois, Raphaël

    2008-01-01

    Protein surfaces are under significant selection pressure to maintain interactions with their partners throughout evolution. Capturing how selection pressure acts at the interfaces of protein–protein complexes is a fundamental issue with high interest for the structural prediction of macromolecular assemblies. We tackled this issue under the assumption that, throughout evolution, mutations should minimally disrupt the physicochemical compatibility between specific clusters of interacting residues. This constraint drove the development of the so-called Surface COmplementarity Trace in Complex History score (SCOTCH), which was found to discriminate with high efficiency the structure of biological complexes. SCOTCH performances were assessed not only with respect to other evolution-based approaches, such as conservation and coevolution analyses, but also with respect to statistically based scoring methods. Validated on a set of 129 complexes of known structure exhibiting both permanent and transient intermolecular interactions, SCOTCH appears as a robust strategy to guide the prediction of protein–protein complex structures. Of particular interest, it also provides a basic framework to efficiently track how protein surfaces could evolve while keeping their partners in contact. PMID:18511568

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, W.G.

    Scapteriscus vicinus is the most important pest of turf and pasture grasses in Florida. This study develops a method of correlating sample results with true population density and provides the first quantitative information on spatial distribution and movement patterns of mole crickets. Three basic techniques for sampling mole crickets were compared: soil flushes, soil corer, and pitfall trapping. No statistical difference was found between the soil corer and soil flushing. Soil flushing was shown to be more sensitive to changes in population density than pitfall trapping. No technique was effective for sampling adults. Regression analysis provided a means of adjusting for the effects of soil moisture and showed soil temperature to be unimportant in predicting efficiency of flush sampling. Cesium-137 was used to label females for subsequent location underground. Comparison of mean distance to nearest neighbor with the distance predicted by a random distribution model showed that the observed distance in the spring was significantly greater than hypothesized (Student's T-test, p < 0.05). Fall adult nearest neighbor distance was not different than predicted by the random distribution hypothesis.

  14. The Acquired Preparedness Model of Risk for Bulimic Symptom Development

    PubMed Central

    Combs, Jessica L.; Smith, Gregory T.; Flory, Kate; Simmons, Jean R.; Hill, Kelly K.

    2010-01-01

    The authors applied person-environment transaction theory to test the acquired preparedness model of eating disorder risk. The model holds that (a) middle school girls high in the trait of ineffectiveness are differentially prepared to acquire high risk expectancies for reinforcement from dieting/thinness; (b) those expectancies predict subsequent binge eating and purging; and (c) the influence of the disposition of ineffectiveness on binge eating and purging is mediated by dieting/thinness expectancies. In a three-wave longitudinal study of 394 middle school girls, they found support for the model. Seventh grade girls’ scores on ineffectiveness predicted their subsequent endorsement of high risk dieting/thinness expectancies, which in turn predicted subsequent increases in binge eating and purging. Statistical tests of mediation supported the hypothesis that the prospective relation between ineffectiveness and binge eating was mediated by dieting/thinness expectancies, as was the prospective relation between ineffectiveness and purging. This application of a basic science theory to eating disorder risk appears fruitful, and the findings suggest the importance of early interventions that address both disposition and learning. PMID:20853933

  15. CADDIS Volume 4. Data Analysis: Basic Principles & Issues

    EPA Pesticide Factsheets

    Use of inferential statistics in causal analysis, introduction to data independence and autocorrelation, methods to identify and control for confounding variables, references for the Basic Principles section of Data Analysis.

  16. Random Fields

    NASA Astrophysics Data System (ADS)

    Vanmarcke, Erik

    1983-03-01

    Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.

  17. Are We Able to Pass the Mission of Statistics to Students?

    ERIC Educational Resources Information Center

    Hindls, Richard; Hronová, Stanislava

    2015-01-01

    The article illustrates our long-term experience in teaching statistics for non-statisticians, especially for students of economics and humanities. The article is focused on some problems of the basic course that can weaken interest in statistics or lead to misuse of statistical methods.

  18. Solar Activity Heading for a Maunder Minimum?

    NASA Astrophysics Data System (ADS)

    Schatten, K. H.; Tobiska, W. K.

    2003-05-01

    Long-range (few years to decades) solar activity prediction techniques vary greatly in their methods. They range from examining planetary orbits, to spectral analyses (e.g. Fourier, wavelet and spectral analyses), to artificial intelligence methods, to simply using general statistical techniques. Rather than concentrate on statistical/mathematical/numerical methods, we discuss a class of methods which appears to have a "physical basis." Not only does it have a physical basis, but this basis is rooted in both "basic" physics (dynamo theory), but also solar physics (Babcock dynamo theory). The class we discuss is referred to as "precursor methods," originally developed by Ohl, Brown and Williams and others, using geomagnetic observations. My colleagues and I have developed some understanding for how these methods work and have expanded the prediction methods using "solar dynamo precursor" methods, notably a "SODA" index (SOlar Dynamo Amplitude). These methods are now based upon an understanding of the Sun's dynamo processes- to explain a connection between how the Sun's fields are generated and how the Sun broadcasts its future activity levels to Earth. This has led to better monitoring of the Sun's dynamo fields and is leading to more accurate prediction techniques. Related to the Sun's polar and toroidal magnetic fields, we explain how these methods work, past predictions, the current cycle, and predictions of future of solar activity levels for the next few solar cycles. The surprising result of these long-range predictions is a rapid decline in solar activity, starting with cycle #24. If this trend continues, we may see the Sun heading towards a "Maunder" type of solar activity minimum - an extensive period of reduced levels of solar activity. For the solar physicists, who enjoy studying solar activity, we hope this isn't so, but for NASA, which must place and maintain satellites in low earth orbit (LEO), it may help with reboost problems. Space debris, and other aspects of objects in LEO will also be affected. This research is supported by the NSF and NASA.

  19. A reciprocal effects model of the temporal ordering of basic psychological needs and motivation.

    PubMed

    Martinent, Guillaume; Guillet-Descas, Emma; Moiret, Sophie

    2015-04-01

    Using self-determination theory as the framework, we examined the temporal ordering between satisfaction and thwarting of basic psychological needs and motivation. We accomplished this goal by using a two-wave 7-month partial least squares path modeling approach (PLS-PM) among a sample of 94 adolescent athletes (M age = 15.96) in an intensive training setting. The PLS-PM results showed significant paths leading: (a) from T1 satisfaction of basic psychological need for competence to T2 identified regulation, (b) from T1 external regulation to T2 thwarting and satisfaction of basic psychological need for competence, and (c) from T1 amotivation to T2 satisfaction of basic psychological need for relatedness. Overall, our results suggest that the relationship between basic psychological needs and motivation varied depending on the type of basic need and motivation assessed. Basic psychological need for competence predicted identified regulation over time, whereas amotivation and external regulation predicted basic psychological need for relatedness or competence over time.

  20. [Effects of Self-directed Feedback Practice using Smartphone Videos on Basic Nursing Skills, Confidence in Performance and Learning Satisfaction].

    PubMed

    Lee, Seul Gi; Shin, Yun Hee

    2016-04-01

    This study was done to verify the effects of self-directed feedback practice using smartphone videos on nursing students' basic nursing skills, confidence in performance and learning satisfaction. An experimental post-test-only control group design was used. Twenty-nine students were assigned to the experimental group and 29 to the control group. The experimental treatment consisted of exchanging feedback on deficiencies through smartphone-recorded videos of the nursing practice process taken by peers during self-directed practice. Basic nursing skills scores were higher for all items in the experimental group compared to the control group, and the differences were statistically significant ["Measuring vital signs" (t=-2.10, p=.039); "Wearing protective equipment when entering and exiting the quarantine room and the management of waste materials" (t=-4.74, p<.001); "Gavage tube feeding" (t=-2.70, p=.009)]. Confidence in performance was higher in the experimental group compared to the control group, but the differences were not statistically significant. However, after the complete practice, there was a statistically significant difference in overall performance confidence (t=-3.07, p=.003). Learning satisfaction was higher in the experimental group compared to the control group, but the difference was not statistically significant (t=-1.67, p=.100). Results of this study indicate that self-directed feedback practice using smartphone videos can improve basic nursing skills. The significance is that it can help nursing students gain confidence in their nursing skills for the future through improvement of basic nursing skills and performance of quality care, thus providing patients with safer care.

  1. A hybrid approach to advancing quantitative prediction of tissue distribution of basic drugs in human

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulin, Patrick, E-mail: patrick-poulin@videotron.ca; Ekins, Sean; Department of Pharmaceutical Sciences, School of Pharmacy, University of Maryland, 20 Penn Street, Baltimore, MD 21201

    A general toxicity of basic drugs is related to phospholipidosis in tissues. Therefore, it is essential to predict the tissue distribution of basic drugs to facilitate an initial estimate of that toxicity. The objective of the present study was to further assess the original prediction method, which consisted of using the binding to red blood cells measured in vitro for the unbound drug (RBCu) as a surrogate for tissue distribution, by correlating it with unbound tissue:plasma partition coefficients (Kpu) of several tissues, and finally predicting the volume of distribution at steady state (Vss) in humans under in vivo conditions. This correlation method demonstrated inaccurate predictions of Vss for particular basic drugs that did not follow the original correlation principle. Therefore, the novelty of this study is to clarify the underlying hypotheses by identifying i) the impact of pharmacological mode of action on the generic RBCu-Kpu correlation, ii) additional mechanisms of tissue distribution for the outlier drugs, and iii) the molecular features and properties that differentiate compounds as outliers in the original correlation analysis, in order to delineate its applicability domain alongside the properties already used, and finally iv) to present a novel and refined correlation method that is superior to what has been previously published for the prediction of human Vss of basic drugs. Applying a refined correlation method after identifying outliers would facilitate the prediction of more accurate distribution parameters as key inputs used in physiologically based pharmacokinetic (PBPK) and phospholipidosis models.

  2. Provision of Pre-Primary Education as a Basic Right in Tanzania: Reflections from Policy Documents

    ERIC Educational Resources Information Center

    Mtahabwa, Lyabwene

    2010-01-01

    This study sought to assess provision of pre-primary education in Tanzania as a basic right through analyses of relevant policy documents. Documents which were published over the past decade were considered, including educational policies, action plans, national papers, the "Basic Education Statistics in Tanzania" documents, strategy…

  3. Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models

    PubMed Central

    Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Background Prediction is a fundamental part of the prevention of cardiovascular diseases (CVD). The development of prediction algorithms based on multivariate regression models began several decades ago. In parallel with predictive model development, biomarker research emerged on an impressively large scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers, or more basically, how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for comparing the predictive performance of models with and without novel biomarkers. Objectives The lack of user-friendly statistical software has restricted implementation of novel model assessment methods when examining novel biomarkers. We intended, thus, to develop user-friendly software that can be used by researchers with few programming skills. Materials and Methods We have written a Stata command that is intended to help researchers obtain the cut point-free and cut point-based net reclassification improvement (NRI) indices and the relative and absolute integrated discrimination improvement (IDI) indices for logistic-based regression analyses. We applied the command to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on a family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. Results The command is addpred for logistic regression models. Conclusions The Stata package provided herein can encourage the use of novel methods in examining the predictive capacity of the ever-emerging plethora of novel biomarkers. PMID:27279830
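
    A minimal Python sketch of the two quantities such a command reports, assuming the cut point-free (continuous) NRI and the absolute/relative IDI are computed from predicted probabilities of a baseline and an extended logistic model. The function name and inputs are illustrative; this is not the addpred implementation itself.

      import numpy as np

      def continuous_nri_idi(y, p_old, p_new):
          """Cut point-free NRI and absolute/relative IDI for two risk models.

          y     : 0/1 outcome array
          p_old : predicted probabilities from the baseline logistic model
          p_new : predicted probabilities from the model with the added biomarkers
          (all inputs are illustrative; this is not the addpred command itself)
          """
          y, p_old, p_new = map(np.asarray, (y, p_old, p_new))
          events, nonevents = y == 1, y == 0

          # Continuous NRI: net correct upward/downward movement of predicted risk
          up, down = p_new > p_old, p_new < p_old
          nri = (up[events].mean() - down[events].mean()) + \
                (down[nonevents].mean() - up[nonevents].mean())

          # IDI: change in discrimination slope (mean risk in events minus non-events)
          slope_old = p_old[events].mean() - p_old[nonevents].mean()
          slope_new = p_new[events].mean() - p_new[nonevents].mean()
          idi_abs = slope_new - slope_old
          idi_rel = idi_abs / slope_old if slope_old != 0 else float("nan")
          return nri, idi_abs, idi_rel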

  4. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
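
    A toy Python illustration of the error-diagram summary described above, assuming per-cell alarm fractions, a per-cell weighting measure standing in for the normalized target-event rate λ(dg), and a list of hit/missed target events; all inputs and names are hypothetical and the sketch is not the M8 algorithm itself.

      import numpy as np

      def prediction_capability(target_hit, alarm_fraction, event_rate=None):
          """Toy (n, tau) error-diagram summary and H = 1 - (n + tau).

          target_hit     : boolean array, one entry per target event (True = predicted)
          alarm_fraction : per-cell fraction of time covered by alarms
          event_rate     : per-cell measure standing in for lambda(dg); uniform if None
          """
          target_hit = np.asarray(target_hit, dtype=bool)
          alarm_fraction = np.asarray(alarm_fraction, dtype=float)
          if event_rate is None:
              event_rate = np.ones_like(alarm_fraction)
          w = np.asarray(event_rate, dtype=float)
          w = w / w.sum()                          # normalized measure over the region

          n = 1.0 - target_hit.mean()              # fraction of failures-to-predict
          tau = float((w * alarm_fraction).sum())  # measure-weighted alarm rate
          return n, tau, 1.0 - (n + tau)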

  5. A crash course on data analysis in asteroseismology

    NASA Astrophysics Data System (ADS)

    Appourchaux, Thierry

    2014-02-01

    In this course, I try to provide a few basics required for performing data analysis in asteroseismology. First, I address how one can properly treat time series: the sampling, the filtering effect, the use of the Fourier transform, and the associated statistics. Second, I address how one can apply statistics for decision making and for parameter estimation, either in a frequentist or a Bayesian framework. Last, I review how these basic principles have been applied (or not) in asteroseismology.

  6. On Ruch's Principle of Decreasing Mixing Distance in classical statistical physics

    NASA Astrophysics Data System (ADS)

    Busch, Paul; Quadt, Ralf

    1990-10-01

    Ruch's Principle of Decreasing Mixing Distance is reviewed as a statistical physical principle, and its basic support and geometric interpretation, the Ruch-Schranner-Seligman theorem, is generalized to be applicable to a large representative class of classical statistical systems.

  7. Kappa Distribution in a Homogeneous Medium: Adiabatic Limit of a Super-diffusive Process?

    NASA Astrophysics Data System (ADS)

    Roth, I.

    2015-12-01

    The classical statistical theory predicts that an ergodic, weakly interacting system, like charged particles in the presence of electromagnetic fields performing Brownian motions (characterized by small-range deviations in phase space and short-term microscopic memory), converges to the Gibbs-Boltzmann statistics. Observation of distributions with kappa power-law tails in homogeneous systems contradicts this prediction and necessitates a renewed analysis of the basic axioms of the diffusion process: the characteristics of the transition probability density function (pdf) for a single interaction, with the possibility of a non-Markovian process and non-local interactions. The non-local, Lévy-walk deviation is related to the non-extensive statistical framework. Particles bouncing along the (solar) magnetic field with evolving pitch angles, phases and velocities, as they interact resonantly with waves, undergo energy changes at undetermined time intervals, satisfying these postulates. The dynamic evolution of a general continuous-time random walk is determined by the pdfs of jumps and waiting times, resulting in a fractional Fokker-Planck equation with non-integer derivatives whose solution is given by a Fox H-function. The resulting procedure involves fractional calculus, which is known although not frequently used in physics, while the local, Markovian process recasts the evolution into the standard Fokker-Planck equation. Solution of the fractional Fokker-Planck equation with the help of the Mellin transform and evaluation of its residues at the poles of its Gamma functions result in a slowly converging sum with power laws. It is suggested that these tails form the kappa function. Gradual vs. impulsive solar electron distributions serve as prototypes of this description.

  8. Center for Prostate Disease Research

    MedlinePlus


  9. Basic Aerospace Education Library

    ERIC Educational Resources Information Center

    Journal of Aerospace Education, 1975

    1975-01-01

    Lists the most significant resource items on aerospace education which are presently available. Includes source books, bibliographies, directories, encyclopedias, dictionaries, audiovisuals, curriculum/planning guides, aerospace statistics, aerospace education statistics and newsletters. (BR)

  10. Multiple-solution problems in a statistics classroom: an example

    NASA Astrophysics Data System (ADS)

    Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing

    2017-11-01

    The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of a multiple-solution problem in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass function of the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (the sample space in probability theory, the probability generating function in statistics, integer partitions in basic combinatorics and the individual risk model in actuarial science) and thus promote upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
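
    A short Python sketch of one of the solution routes mentioned above: the exact probability mass function of the sum is obtained by convolving the per-die face distributions, which is equivalent to multiplying probability generating functions. The abstract does not specify the dice, so the Sicherman pair used below is only an illustrative choice.

      from collections import Counter
      from fractions import Fraction

      def pmf_of_sum(dice):
          """Exact PMF of the sum of face values; each die is a list of faces."""
          pmf = Counter({0: Fraction(1)})
          for faces in dice:
              p_face = Fraction(1, len(faces))
              step = Counter()
              for total, p in pmf.items():
                  for f in faces:
                      step[total + f] += p * p_face
              pmf = step
          return dict(sorted(pmf.items()))

      # Illustrative non-traditional dice: the Sicherman pair, whose sum has the
      # same distribution as two ordinary six-sided dice.
      print(pmf_of_sum([[1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8]]))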

  11. Insight into others' minds: spatio-temporal representations by intrinsic frame of reference.

    PubMed

    Sun, Yanlong; Wang, Hongbin

    2014-01-01

    Recent research has seen a growing interest in connections between the domains of spatial and social cognition. Much evidence indicates that the processes of representing space in distinct frames of reference (FOR) contribute to basic spatial abilities as well as to sophisticated social abilities such as tracking others' intentions and beliefs. It remains debated, however, whether belief reasoning in the social domain requires an innately dedicated system and cannot be reduced to low-level encoding of spatial relationships. Here we offer an integrated account advocating the critical roles of spatial representations in an intrinsic frame of reference. By re-examining the results from a spatial task (Tamborello et al., 2012) and a false-belief task (Onishi and Baillargeon, 2005), we argue that spatial and social abilities share a common origin at the level of spatio-temporal association and predictive learning, where multiple FOR-based representations provide the basic building blocks for efficient and flexible partitioning of the environmental statistics. We also discuss neuroscience evidence supporting these mechanisms. We conclude that FOR-based representations may bridge the conceptual as well as the implementation gaps between the burgeoning fields of social and spatial cognition.

  12. THE DISCOUNTED REPRODUCTIVE NUMBER FOR EPIDEMIOLOGY

    PubMed Central

    Reluga, Timothy C.; Medlock, Jan; Galvani, Alison

    2013-01-01

    The basic reproductive number, R0, and the effective reproductive number, Re, are commonly used in mathematical epidemiology as summary statistics for the size and controllability of epidemics. However, these commonly used reproductive numbers can be misleading when applied to predict pathogen evolution because they do not incorporate the impact of the timing of events in the life-history cycle of the pathogen. To study evolution problems where the host population size is changing, measures like the ultimate proliferation rate must be used. A third measure of reproductive success, which combines properties of both the basic reproductive number and the ultimate proliferation rate, is the discounted reproductive number, Rd. The discounted reproductive number is a measure of reproductive success that is an individual's expected lifetime offspring production discounted by the background population growth rate. Here, we draw attention to the discounted reproductive number by providing an explicit definition and a systematic application framework. We describe how the discounted reproductive number overcomes the limitations of both the standard reproductive numbers and proliferation rates, and show that Rd is closely connected to Fisher's reproductive values for different life-history stages. PMID:19364158
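
    As a hedged numerical illustration of the verbal definition above (expected lifetime offspring production discounted by the background population growth rate), the Python sketch below approximates Rd as the integral of discounted survival times fecundity over the life history. The rates, names and simple integral form are assumptions for illustration and do not reproduce the paper's full framework.

      import numpy as np

      def discounted_reproductive_number(birth_rate, mortality_rate, growth_rate,
                                         t_max=200.0, dt=0.01):
          """Rd ~ integral of exp(-delta*t) * survival(t) * birth_rate(t) dt (illustrative)."""
          t = np.arange(0.0, t_max, dt)
          mu = np.array([mortality_rate(s) for s in t])
          survival = np.exp(-np.cumsum(mu) * dt)        # probability of surviving to time t
          b = np.array([birth_rate(s) for s in t])
          return float(np.sum(np.exp(-growth_rate * t) * survival * b) * dt)

      # With constant rates b, mu and discount delta, Rd should approach b / (mu + delta).
      print(discounted_reproductive_number(lambda t: 2.0, lambda t: 1.0, 0.5))  # ~ 1.33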

  13. The Statistical Power of Planned Comparisons.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Basic principles underlying statistical power are examined, and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…

  14. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who rush to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic," earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and, therefore, cannot objectively quantify a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models against the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for "tomorrow" (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in the automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency planners and the media a forecast product that is based on wrong assumptions violating the best-documented earthquake statistics in California, whose accuracy was not investigated, and whose forecasts were not tested in a rigorous way.

  15. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.

  16. LFSTAT - Low-Flow Analysis in R

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor

    2013-04-01

    The calculation of characteristic stream flow during dry conditions is a basic requirement for many problems in hydrology, ecohydrology and water resources management. As opposed to floods, a number of different indices are used to characterise low flows and streamflow droughts. Although these indices and methods of calculation have been well documented in the WMO Manual on Low-flow Estimation and Prediction [1], comprehensive software enabling a fast and standardized calculation of low-flow statistics has been missing. We present the new software package lfstat to fill this gap. Our software package is based on the statistical open-source software R and extends it to analyse daily stream flow records with a focus on low flows. As command-line based programs are not everyone's preference, we also offer a plug-in for the R-Commander, an easy-to-use graphical user interface (GUI) for R based on tcl/tk. The functionality of lfstat includes estimation methods for low-flow indices, extreme value statistics, deficit characteristics, and additional graphical methods to control the computation of complex indices and to illustrate the data. Besides the basic low-flow indices, the baseflow index and recession constants can be computed. For extreme value statistics, state-of-the-art methods for L-moment based local and regional frequency analysis (RFA) are available. The tools for deficit characteristics include various pooling and threshold selection methods to support the calculation of drought duration and deficit indices. The most common graphics for low-flow analysis are available, and the plots can be modified according to the user's preferences. Graphics include hydrographs for different periods, flexible streamflow deficit plots, baseflow visualisation, recession diagnostics, flow duration curves as well as double mass curves, and many more. From a technical point of view, the package uses an S3 class called lfobj (low-flow object). These objects are ordinary R data frames including date, flow, hydrological year and, optionally, baseflow information. Once these objects are created, analysis can be performed by mouse-click and a script can be saved to make the analysis easily reproducible. At the moment we offer implementations of all major methods proposed in the WMO Manual on Low-flow Estimation and Prediction [1]. Future plans include a dynamic low-flow report in odt file format using odf-weave, which allows automatic updates if the data or analysis change. We hope to offer a tool that eases and structures the analysis of stream flow data focusing on low flows and makes the analysis transparent and communicable. The package can also be used to teach students the first steps in low-flow hydrology. The software package can be installed from CRAN (latest stable version) and from R-Forge: http://r-forge.r-project.org (development version). References: [1] Gustard, Alan; Demuth, Siegfried (eds.). Manual on Low-flow Estimation and Prediction. Geneva, Switzerland: World Meteorological Organization (Operational Hydrology Report No. 50, WMO-No. 1029).

  17. Subcellular localization for Gram positive and Gram negative bacterial proteins using linear interpolation smoothing model.

    PubMed

    Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2015-12-07

    Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and aids in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied to predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. This approach has been evaluated by predicting the subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
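
    A minimal Python sketch of the general linear interpolation smoothing idea the abstract borrows from natural language processing: higher-order and lower-order conditional probabilities are mixed with weights summing to one, and a sequence is assigned to the location whose model scores it highest. The residue bigram/unigram features and the fixed interpolation weights here are illustrative; the paper's actual feature extraction and maximum-likelihood weight fitting are not reproduced.

      import math
      from collections import Counter, defaultdict

      def train_counts(sequences):
          """Unigram and bigram counts over residue sequences (plain strings)."""
          uni, bi = Counter(), defaultdict(Counter)
          for seq in sequences:
              uni.update(seq)
              for a, b in zip(seq, seq[1:]):
                  bi[a][b] += 1
          return uni, bi

      def interpolated_log_likelihood(seq, uni, bi, lambdas=(0.7, 0.3)):
          """Score a sequence with linearly interpolated bigram/unigram probabilities."""
          lam_bi, lam_uni = lambdas              # fixed weights here; fitted in practice
          total = sum(uni.values())
          score = 0.0
          for a, b in zip(seq, seq[1:]):
              p_uni = uni[b] / total if total else 0.0
              p_bi = bi[a][b] / sum(bi[a].values()) if bi[a] else 0.0
              p = lam_bi * p_bi + lam_uni * p_uni
              score += math.log(p) if p > 0 else float("-inf")
          return score

      # One model is trained per subcellular location; a query sequence is assigned
      # to the location whose model yields the highest interpolated log-likelihood.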

  18. Data mining: childhood injury control and beyond.

    PubMed

    Tepas, Joseph J

    2009-08-01

    Data mining is defined as the automatic extraction of useful, often previously unknown information from large databases or data sets. It has become a major part of modern life and is extensively used in industry, banking, government, and health care delivery. The process requires a data collection system that integrates input from multiple sources containing critical elements that define outcomes of interest. Appropriately designed data mining processes identify and adjust for confounding variables. The statistical modeling used to manipulate accumulated data may involve any number of techniques. As predicted results are periodically analyzed against those observed, the model is consistently refined to optimize precision and accuracy. Whether applying integrated sources of clinical data to inferential probabilistic prediction of risk of ventilator-associated pneumonia or population surveillance for signs of bioterrorism, it is essential that modern health care providers have at least a rudimentary understanding of what the concept means, how it basically works, and what it means to current and future health care.

  19. Atlas of Seasonal Means Simulated by the NSIPP 1 Atmospheric GCM. Volume 17

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Bacmeister, Julio; Pegion, Philip J.; Schubert, Siegfried D.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    This atlas documents the climate characteristics of version 1 of the NASA Seasonal-to-Interannual Prediction Project (NSIPP) Atmospheric General Circulation Model (AGCM). The AGCM includes an interactive land model (the Mosaic scheme) and is part of the NSIPP coupled atmosphere-land-ocean model. The results presented here are based on a 20-year (December 1979-November 1999) AMIP-style integration of the AGCM in which the monthly-mean sea-surface temperature and sea ice are specified from observations. The climate characteristics of the AGCM are compared with the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalyses. Other verification data include Special Sensor Microwave/Imager (SSM/I) total precipitable water, the Xie-Arkin estimates of precipitation, and Earth Radiation Budget Experiment (ERBE) measurements of shortwave and longwave radiation. The atlas is organized by season. The basic quantities include seasonal mean global maps and zonal and vertical averages of circulation, variance/covariance statistics, and selected physics quantities.

  20. Network theory inspired analysis of time-resolved expression data reveals key players guiding P. patens stem cell development.

    PubMed

    Busch, Hauke; Boerries, Melanie; Bao, Jie; Hanke, Sebastian T; Hiss, Manuel; Tiko, Theodhor; Rensing, Stefan A

    2013-01-01

    Transcription factors (TFs) often trigger developmental decisions, yet their transcripts are often only moderately regulated and thus not easily detected by conventional statistics on expression data. Here we present a method that allows such genes to be identified based on trajectory analysis of time-resolved transcriptome data. As a proof of principle, we have analysed apical stem cells of filamentous moss (P. patens) protonemata that develop from leaflets upon their detachment from the plant. By our novel correlation analysis of the post-detachment transcriptome kinetics we predict five out of 1,058 TFs to be involved in the signaling leading to the establishment of pluripotency. Among the predicted regulators is the basic helix-loop-helix TF PpRSL1, which we show to be involved in the establishment of apical stem cells in P. patens. Our methodology is expected to aid the analysis of key players in developmental decisions in complex plant and animal systems.

  1. Specific personality traits and general personality dysfunction as predictors of the presence and severity of personality disorders in a clinical sample.

    PubMed

    Berghuis, Han; Kamphuis, Jan H; Verheul, Roel

    2014-01-01

    This study examined the associations of specific personality traits and general personality dysfunction in relation to the presence and severity of Diagnostic and Statistical Manual of Mental Disorders (4th ed. [DSM-IV]; American Psychiatric Association, 1994) personality disorders in a Dutch clinical sample. Two widely used measures of specific personality traits were selected, the Revised NEO Personality Inventory as a measure of normal personality traits, and the Dimensional Assessment of Personality Pathology-Basic Questionnaire as a measure of pathological traits. In addition, 2 promising measures of personality dysfunction were selected, the General Assessment of Personality Disorder and the Severity Indices of Personality Problems. Theoretically predicted associations were found between the measures, and all measures predicted the presence and severity of DSM-IV personality disorders. The combination of general personality dysfunction models and personality traits models provided incremental information about the presence and severity of personality disorders, suggesting that an integrative approach of multiple perspectives might serve comprehensive assessment of personality disorders.

  2. Quark-Gluon Plasma

    NASA Astrophysics Data System (ADS)

    Sinha, Bikash; Pal, Santanu; Raha, Sibaji

    Quark-Gluon Plasma (QGP) is a state of matter predicted by the theory of strong interactions - Quantum Chromodynamics (QCD). The area of QGP lies at the interface of particle physics, field theory, nuclear physics and many-body theory, statistical physics, cosmology and astrophysics. In its brief history (about a decade), QGP has seen a rapid convergence of ideas from these previously diverging disciplines. This volume includes the lectures delivered by eminent specialists to students without prior experience in QGP. Each course thus starts from the basics and takes the students by steps to the current problems. The chapters are self-contained and pedagogic in style. The book may therefore serve as an introduction for advanced graduate students intending to enter this field or for physicists working in other areas. Experts in QGP may also find this volume a handy reference. Specific examples, used to elucidate how theoretical predictions and experimentally accessible quantities may not always correspond to one another, make this book ideal for self-study for beginners. This feature will also make the volume thought-provoking for QGP practitioners.

  3. 29 CFR 1904.42 - Requests from the Bureau of Labor Statistics for data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    29 CFR 1904.42 - Requests from the Bureau of Labor Statistics for data. (a) Basic requirement. If you receive a Survey of Occupational Injuries and Illnesses Form from the Bureau of Labor Statistics (BLS), or a BLS designee, you must promptly complete the form...

  4. 78 FR 34101 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... and basic descriptive statistics on the quantity and type of consumer-reported patient safety events... conduct correlations, cross tabulations of responses and other statistical analysis. Estimated Annual...

  5. Reply to “Statistical evaluation of the VAN Method using the historic earthquake catalog in Greece,” by Richard L. Aceves, Stephen K. Park and David J. Strauss

    NASA Astrophysics Data System (ADS)

    Varotsos, P.; Lazaridou, M.

    The pioneering calculation by Aceves et al. [1996] shed light on the main question of this debate, i.e., on whether “VAN predictions can be ascribed to chance.” Aceves et al. [1996] conclude that “the VAN method has resulted in a significantly higher prediction rate than randomly sampling a PDF (probability density function) map generated from a 25 year history of earthquakes.” After investigating the totality of VAN predictions issued during the period 1987-1989, Aceves et al. [1996] found: “The prediction rate for the VAN method clearly exceeds that from the random model at all time lags between 5-22 days. At a 5 day time lag, the VAN prediction rate of 35.7% has a P-value of less than 0.06%. This means that a random model does as well as does the VAN method less than 0.06% of the time. At 22 days, the prediction rate of 67.9% has a P-value of less than 0.07%.” These conclusions basically coincide with those of Hamada [1993] although Aceves et al. [1996] followed different procedures. They are also in fundamental agreement with the results of Honkura and Tanaka [1996]. Another important conclusion of Aceves et al. [1996] is that, after declustering the earthquake catalog and prediction list from aftershocks, “VAN method is still formally significant.”

  6. Responsiveness and predictive validity of the tablet-based symbol digit modalities test in patients with stroke.

    PubMed

    Hsiao, Pei-Chi; Yu, Wan-Hui; Lee, Shih-Chieh; Chen, Mei-Hsiang; Hsieh, Ching-Lin

    2018-06-14

    The responsiveness and predictive validity of the Tablet-based Symbol Digit Modalities Test (T-SDMT) are unknown, which limits the utility of the T-SDMT in both clinical and research settings. The purpose of this study was to examine the responsiveness and predictive validity of the T-SDMT in inpatients with stroke. A follow-up, repeated-assessments design was used in one rehabilitation unit at a local medical center. A total of 50 inpatients receiving rehabilitation completed T-SDMT assessments at admission to and discharge from a rehabilitation ward. The median follow-up period was 14 days. The Barthel index (BI) was assessed at discharge and was used as the criterion for predictive validity. The mean changes in T-SDMT scores between admission and discharge were statistically significant (paired t-test = 3.46, p = 0.001). The T-SDMT scores showed a nearly moderate standardized response mean (0.49). A moderate association (Pearson's r = 0.47) was found between the T-SDMT scores at admission and the BI scores at discharge, indicating good predictive validity of the T-SDMT. Our results support the responsiveness and predictive validity of the T-SDMT in patients with stroke receiving rehabilitation in hospitals. This study provides empirical evidence supporting the use of the T-SDMT as an outcome measure for assessing processing speed in inpatients with stroke. The scores of the T-SDMT could be used to predict basic activities of daily living function in inpatients with stroke.

  7. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction

    PubMed Central

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
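
    A bare-bones Python sketch of the decreasing-Pa schedule described above, embedded in a much simplified cuckoo search loop: Pa starts at a maximum value and decreases linearly to a minimum over the iterations, so that more nests are abandoned (explored) early and fewer late. The Lévy flights and the master-leader-slave populations are not reproduced, and all names and parameter values are illustrative.

      import numpy as np

      def cuckoo_search_decreasing_pa(objective, dim=5, n_nests=15, iters=200,
                                      pa_max=0.5, pa_min=0.05, bounds=(-5.0, 5.0), seed=0):
          """Bare-bones cuckoo search with a linearly decreasing abandonment fraction Pa."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          nests = rng.uniform(lo, hi, size=(n_nests, dim))
          fitness = np.apply_along_axis(objective, 1, nests)

          for it in range(iters):
              # Pa decreases from pa_max to pa_min: more exploration early, more exploitation late
              pa = pa_max - (pa_max - pa_min) * it / (iters - 1)

              # New solutions around the current best (crude stand-in for Levy flights)
              best = nests[np.argmin(fitness)]
              step = 0.1 * rng.standard_normal(nests.shape) * (nests - best)
              candidates = np.clip(nests + step, lo, hi)
              cand_fit = np.apply_along_axis(objective, 1, candidates)
              improved = cand_fit < fitness
              nests[improved], fitness[improved] = candidates[improved], cand_fit[improved]

              # Abandon a fraction Pa of the worst nests and replace them with random ones
              n_abandon = int(round(pa * n_nests))
              if n_abandon:
                  worst = np.argsort(fitness)[-n_abandon:]
                  nests[worst] = rng.uniform(lo, hi, size=(n_abandon, dim))
                  fitness[worst] = np.apply_along_axis(objective, 1, nests[worst])

          best_idx = int(np.argmin(fitness))
          return nests[best_idx], float(fitness[best_idx])

      # Example: minimize the sphere function
      x_best, f_best = cuckoo_search_decreasing_pa(lambda x: float(np.sum(x ** 2)))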

  8. Data Mining Approaches for Genomic Biomarker Development: Applications Using Drug Screening Data from the Cancer Genome Project and the Cancer Cell Line Encyclopedia.

    PubMed

    Covell, David G

    2015-01-01

    Developing reliable biomarkers of tumor cell drug sensitivity and resistance can guide hypothesis-driven basic science research and influence pre-therapy clinical decisions. A popular strategy for developing biomarkers uses characterizations of human tumor samples against a range of cancer drug responses that correlate with genomic change, developed largely from the efforts of the Cancer Cell Line Encyclopedia (CCLE) and the Sanger Cancer Genome Project (CGP). The purpose of this study is to provide an independent analysis of these data that aims to vet existing biomarker discoveries and applications and to add novel perspectives to them. Existing and alternative data mining and statistical methods will be used to a) evaluate drug responses of compounds with similar mechanism of action (MOA), b) examine measures of gene expression (GE), copy number (CN) and mutation status (MUT) biomarkers, combined with gene set enrichment analysis (GSEA), for hypothesizing biological processes important for drug response, c) conduct global comparisons of GE, CN and MUT as biomarkers across all drugs screened in the CGP dataset, and d) assess the positive predictive power of CGP-derived GE biomarkers as predictors of drug response in CCLE tumor cells. The perspectives derived from individual and global examinations of GEs, MUTs and CNs confirm existing and reveal unique and shared roles for these biomarkers in tumor cell drug sensitivity and resistance. Applications of CGP-derived genomic biomarkers to predict the drug response of CCLE tumor cells find a highly significant ROC, with a positive predictive power of 0.78. The results of this study expand the available data mining and analysis methods for genomic biomarker development and provide additional support for using biomarkers to guide hypothesis-driven basic science research and pre-therapy clinical decisions.

  9. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction.

    PubMed

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results.

  10. Understanding Summary Statistics and Graphical Techniques to Compare Michael Jordan versus LeBron James

    ERIC Educational Resources Information Center

    Williams, Immanuel James; Williams, Kelley Kim

    2016-01-01

    Understanding summary statistics and graphical techniques is a building block for comprehending concepts beyond basic statistics. It is known that motivated students perform better in school. Using examples that students find engaging allows them to understand the concepts at a deeper level.

  11. External validation of the PROFUND index in polypathological patients from internal medicine and acute geriatrics departments in Aragón.

    PubMed

    Díez-Manglano, Jesús; Cabrerizo García, José Luis; García-Arilla Calvo, Ernesto; Jimeno Saínz, Araceli; Calvo Beguería, Eva; Martínez-Álvarez, Rosa M; Bejarano Tello, Esperanza; Caudevilla Martínez, Aránzazu

    2015-12-01

    The objective of the study was to externally and prospectively validate the PROFUND index for predicting the survival of polypathological patients after one year. An observational, prospective and multicenter study was performed. Polypathological patients admitted to an internal medicine or geriatrics department and attended by the investigators consecutively between March 1 and June 30, 2011 were included. Data concerning age, gender, comorbidity, Barthel and Lawton-Brody indexes, Pfeiffer questionnaire, socio-familial Gijon scale, delirium, number of drugs and number of admissions during the previous year were gathered for each patient. The PROFUND index was calculated. The follow-up lasted 1 year. A Cox proportional hazards regression model was fitted and used to analyze the association of the variables with mortality, together with the C-statistic. 465 polypathological patients, 333 from internal medicine and 132 from geriatrics, were included. One-year mortality was associated with age [hazard ratio (HR) 1.52, 95 % CI 1.04-2.12; p = 0.01], presence of neoplasia [HR 2.68, 95 % CI 1.71-4.18; p = 0.0001] and dependence for basic activities of daily living [HR 2.34, 95 % CI 1.61-3.40; p = 0.0009]. In predicting mortality, the PROFUND index showed good discrimination in patients from internal medicine (C-statistic 0.725, 95 % CI 0.670-0.781), but poor discrimination in those from geriatrics (C-statistic 0.546, 95 % CI 0.448-0.644). The PROFUND index is a reliable tool for predicting mortality in polypathological patients from internal medicine.

  12. Empirically Derived Personality Subtyping for Predicting Clinical Symptoms and Treatment Response in Bulimia Nervosa

    PubMed Central

    Haynos, Ann F.; Pearson, Carolyn M.; Utzinger, Linsey M.; Wonderlich, Stephen A.; Crosby, Ross D.; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.

    2016-01-01

    Objective Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Methods Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). Results There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = .03) and purging (p = .01) frequency at EOT and binge eating frequency at follow-up (p = .045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = .04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Discussion Empirically derived personality subtyping appears to be a valid classification system with the potential to guide eating disorder treatment decisions. PMID:27611235

  13. Financial statistics for public health dispensary decisions in Nigeria: insights on standard presentation typologies.

    PubMed

    Agundu, Prince Umor C

    2003-01-01

    Public health dispensaries in Nigeria have in recent times demonstrated the poise to boost corporate productivity in the new millennium and to drive the nation closer to concretising the lofty goal of health-for-all. This is very pronounced considering the face-lift given to the physical environment, the increase in the recruitment and development of professionals, and the upward review of financial subventions. However, there is little or no emphasis on basic statistical appreciation/application, which enhances the decision-making ability of corporate executives. This study used the responses of 120 senior public health officials in Nigeria and analyzed them with the chi-square statistical technique. The results established low statistical aptitude, inadequate statistical training programmes, and little or no emphasis on statistical literacy compared to computer literacy, amongst others. Consequently, it was recommended that these lapses be promptly addressed to enhance executive performance in the establishments. Basic statistical data presentation typologies have been articulated in this study to serve as first-aid instructions to the target group, as they represent the contributions of eminent scholars in this area of intellectualism.

  14. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
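
    A minimal Python sketch of the pixel-wise SPM step that both the Bonferroni and the RFT approaches start from: a two-sample t statistic is computed at every pixel of the registered images and compared against a corrected threshold. Only the conservative Bonferroni threshold is computed here; the RFT threshold, which depends on estimated field smoothness and search-region geometry, is noted in a comment but not implemented. The function name and inputs are illustrative, and numpy/scipy are assumed available.

      import numpy as np
      from scipy import stats

      def pixelwise_spm_t(group_a, group_b, alpha=0.05):
          """Two-sample t statistic at each pixel of registered pressure images.

          group_a, group_b : arrays of shape (n_subjects, height, width), registered
          Returns the t-map, the Bonferroni-corrected critical t and the suprathreshold mask.
          """
          na, nb = group_a.shape[0], group_b.shape[0]
          mean_a, mean_b = group_a.mean(axis=0), group_b.mean(axis=0)
          var_a, var_b = group_a.var(axis=0, ddof=1), group_b.var(axis=0, ddof=1)
          pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
          t_map = (mean_a - mean_b) / np.sqrt(pooled * (1.0 / na + 1.0 / nb) + 1e-12)

          n_comparisons = t_map.size            # every pixel is treated as a separate test
          df = na + nb - 2
          t_bonf = stats.t.ppf(1.0 - alpha / (2 * n_comparisons), df)
          # An RFT threshold would instead be derived from the estimated field smoothness
          # and the search-region geometry, and is generally lower (less conservative).
          return t_map, t_bonf, np.abs(t_map) > t_bonf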

  15. Strategy for Promoting the Equitable Development of Basic Education in Underdeveloped Counties as Seen from Cili County

    ERIC Educational Resources Information Center

    Shihua, Peng; Rihui, Tan

    2009-01-01

    Employing statistical analysis, this study makes a preliminary exploration of how to promote the equitable development of basic education in underdeveloped counties through a case study of Cili county. The unequal development of basic education in the county has been made clear, the reasons for the inequitable education have been analyzed, and,…

  16. Statistical uncertainty of extreme wind storms over Europe derived from a probabilistic clustering technique

    NASA Astrophysics Data System (ADS)

    Walz, Michael; Leckebusch, Gregor C.

    2016-04-01

    Extratropical wind storms pose one of the most dangerous and loss-intensive natural hazards for Europe. However, with only about 50 years of high-quality observational data, it is difficult to assess the statistical uncertainty of these sparse events based on observations alone. Over the last decade seasonal ensemble forecasts have become indispensable for quantifying the uncertainty of weather prediction on seasonal timescales. In this study seasonal forecasts are used in a climatological context: by making use of the up to 51 ensemble members, a broad and physically consistent statistical base can be created. This base can then be used to assess the statistical uncertainty of extreme wind storm occurrence more accurately. In order to determine the statistical uncertainty of storms with different paths of progression, a probabilistic clustering approach using regression mixture models is used to objectively assign storm tracks (based either on core pressure or on extreme wind speeds) to different clusters. The advantage of this technique is that the entire lifetime of a storm is considered in the clustering algorithm. Quadratic curves are found to describe the storm tracks most accurately. Three main clusters (diagonal, horizontal or vertical progression of the storm track) can be identified, each of which has its own particular features. Basic storm features such as average velocity and duration are calculated and compared for each cluster. The main benefit of this clustering technique, however, is to evaluate whether the clusters show different degrees of uncertainty, e.g. more (less) spread for tracks approaching Europe horizontally (diagonally). This statistical uncertainty is compared for different seasonal forecast products.
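
    A simplified Python sketch of the track-description step: each storm track is summarized over its whole lifetime by the coefficients of a fitted quadratic curve, and tracks are then grouped by those coefficients. Plain K-means is used here as a stand-in for the probabilistic regression mixture model of the study, and all names, data shapes and parameter values are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def quadratic_coefficients(lon, lat):
          """Fit lat = a*lon**2 + b*lon + c over a storm's whole lifetime."""
          return np.polyfit(np.asarray(lon), np.asarray(lat), deg=2)

      def cluster_storm_tracks(tracks, n_clusters=3, seed=0):
          """Group tracks by their fitted quadratic coefficients (K-means stand-in)."""
          coeffs = np.array([quadratic_coefficients(lon, lat) for lon, lat in tracks])
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(coeffs)
          return labels, coeffs

      # tracks would be a list of (longitudes, latitudes) pairs, one per storm lifetime.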

  17. A Computational Model for Predicting RNase H Domain of Retrovirus.

    PubMed

    Wu, Sijia; Zhang, Xinman; Han, Jiuqiang

    2016-01-01

    RNase H (RNH) is a pivotal domain in retroviruses, cleaving the DNA-RNA hybrid to allow retroviral replication to continue. This crucial role indicates that RNH is a promising drug target for therapeutic intervention. However, the RNHs annotated in the UniProtKB database are still insufficient for a good understanding of their statistical characteristics. In this work, a computational RNH model was proposed to annotate new putative RNHs (np-RNHs) in retroviruses. It predicts RNH domains by recognizing their start and end sites separately with an SVM method. The classification accuracy rates are 100%, 99.01% and 97.52%, corresponding to the jack-knife, 10-fold cross-validation and 5-fold cross-validation tests, respectively. Subsequently, this model discovered 14,033 np-RNHs after scanning sequences without RNH annotations. All these predicted np-RNHs and annotated RNHs were employed to analyze the length, hydrophobicity and evolutionary relationships of RNH domains. They are all related to retroviral genera, which validates the classification of retroviruses to a certain degree. In the end, a software tool was designed for the application of our prediction model. The software, together with the datasets involved in this paper, is available for free download at https://sourceforge.net/projects/rhtool/files/?source=navbar.

  18. Healthy work revisited: do changes in time strain predict well-being?

    PubMed

    Moen, Phyllis; Kelly, Erin L; Lam, Jack

    2013-04-01

    Building on Karasek and Theorell (R. Karasek & T. Theorell, 1990, Healthy work: Stress, productivity, and the reconstruction of working life, New York, NY: Basic Books), we theorized and tested the relationship between time strain (work-time demands and control) and seven self-reported health outcomes. We drew on survey data from 550 employees fielded before and 6 months after the implementation of an organizational intervention, the Results Only Work Environment (ROWE), in a white-collar organization. Cross-sectional (wave 1) models showed that psychological time demands and time control measures were related to health outcomes in the expected directions. The ROWE intervention did not predict changes in psychological time demands by wave 2, but did predict increased time control (a sense of time adequacy and schedule control). Statistical models revealed that increases in psychological time demands and time adequacy predicted changes in positive (energy, mastery, psychological well-being, self-assessed health) and negative (emotional exhaustion, somatic symptoms, psychological distress) outcomes in the expected directions, net of job and home demands and covariates. This study demonstrates the value of including time strain in investigations of the health effects of job conditions. Results encourage longitudinal models of change in psychological time demands as well as time control, along with the development and testing of interventions aimed at reducing time strain in different populations of workers.

  19. Educating the Educator: U.S. Government Statistical Sources for Geographic Research and Teaching.

    ERIC Educational Resources Information Center

    Fryman, James F.; Wilkinson, Patrick J.

    Appropriate for college geography students and researchers, this paper briefly introduces basic federal statistical publications and corresponding finding aids. General references include "Statistical Abstract of the United States," and three complementary publications: "County and City Data Book,""State and Metropolitan Area Data Book," and…

  20. Statistical Cost Estimation in Higher Education: Some Alternatives.

    ERIC Educational Resources Information Center

    Brinkman, Paul T.; Niwa, Shelley

    Recent developments in econometrics that are relevant to the task of estimating costs in higher education are reviewed. The relative effectiveness of alternative statistical procedures for estimating costs is also tested. Statistical cost estimation involves three basic parts: a model, a data set, and an estimation procedure. Actual data are used…

  1. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  2. Ethical Statistics and Statistical Ethics: Making an Interdisciplinary Module

    ERIC Educational Resources Information Center

    Lesser, Lawrence M.; Nordenhaug, Erik

    2004-01-01

    This article describes an innovative curriculum module the first author created on the two-way exchange between statistics and applied ethics. The module, having no particular mathematical prerequisites beyond high school algebra, is part of an undergraduate interdisciplinary ethics course which begins with a 3-week introduction to basic applied…

  3. Beyond Classical Information Theory: Advancing the Fundamentals for Improved Geophysical Prediction

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.; Pires, C. L.; Hall, J.; Bloeschl, G.

    2016-12-01

    Information Theory, in its original and quantum forms, has gradually made its way into various fields of science and engineering. From the very basic concepts of Information Entropy and Mutual Information to Transit Information, Interaction Information and their partitioning into statistical synergy, redundancy and exclusivity, the overall theoretical foundations had matured by the mid-20th century. In the Earth Sciences, various interesting applications have been devised over the last few decades, such as the design of complex process networks of a descriptive and/or inferential nature, wherein earth system processes are "nodes" and statistical relationships between them are treated as information-theoretical "interactions". However, most applications still rely on the earliest concepts along with their many caveats, especially in heavily non-Normal, non-linear and structurally changing scenarios. In order to overcome the traditional limitations of information theory and tackle elusive Earth System phenomena, we introduce a new suite of information-dynamic methodologies towards a more physically consistent and informationally comprehensive framework. The methodological developments are then illustrated on a set of practical examples from geophysical fluid dynamics, where high-order nonlinear relationships elusive to current non-linear information measures are aptly captured. In doing so, these advances increase the predictability of critical events such as the emergence of hyper-chaotic regimes in ocean-atmospheric dynamics and the occurrence of hydro-meteorological extremes.
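
    As a concrete reminder of the basic quantities named above, the following sketch computes Shannon entropy and mutual information for a small, made-up discrete joint distribution of two earth-system variables; it illustrates the definitions only, not the new methodologies introduced in the abstract.

      # Shannon entropy H and mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)
      import numpy as np

      p_xy = np.array([[0.30, 0.10],
                       [0.05, 0.55]])      # hypothetical joint probabilities P(X, Y)
      p_x = p_xy.sum(axis=1)
      p_y = p_xy.sum(axis=0)

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log2(p))   # in bits

      mi = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
      print(f"H(X)={entropy(p_x):.3f}  H(Y)={entropy(p_y):.3f}  I(X;Y)={mi:.3f} bits")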

  4. Probabilistic neural networks modeling of the 48-h LC50 acute toxicity endpoint to Daphnia magna.

    PubMed

    Niculescu, S P; Lewis, M A; Tigner, J

    2008-01-01

    Two modeling experiments based on the maximum likelihood estimation paradigm and targeting prediction of the Daphnia magna 48-h LC50 acute toxicity endpoint for both organic and inorganic compounds are reported. The resulting models' computational algorithms are implemented as basic probabilistic neural networks with a Gaussian kernel (statistical corrections included). The first experiment uses strictly D. magna information for 971 structures as training/learning data, and the resulting model targets practical applications. The second experiment uses the same training/learning information plus additional data on another 29 compounds whose endpoint information originates from D. pulex and Ceriodaphnia dubia. It only targets investigation of the effect of mixing strictly D. magna 48-h LC50 modeling information with small amounts of similar information estimated from related species, and this is done as part of the validation process. A complementary 81-compound dataset (involving only strictly D. magna information) is used to perform external testing. On this external test set, the Gaussian character of the distribution of the residuals is confirmed for both models. This allows the use of traditional statistical methodology to compute confidence intervals for the unknown measured values based on the models' predictions. Examples are provided for the model targeting practical applications. For the same model, a comparison with other existing models targeting the same endpoint is performed.
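
    The flavor of a Gaussian-kernel probabilistic network can be conveyed with a short sketch: a prediction for a new compound is the kernel-weighted average of the training endpoints. This is a generic generalized-regression formulation under assumed descriptors and a single smoothing parameter, not the authors' implementation (which includes statistical corrections).

      import numpy as np

      def gaussian_kernel_predict(X_train, y_train, x_new, sigma=1.0):
          d2 = np.sum((X_train - x_new) ** 2, axis=1)   # squared descriptor distances
          w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
          return np.sum(w * y_train) / np.sum(w)        # weighted mean of endpoints

      rng = np.random.default_rng(1)
      X_train = rng.normal(size=(971, 8))   # placeholder descriptors for 971 structures
      y_train = rng.normal(size=971)        # placeholder log LC50 endpoints
      x_new = rng.normal(size=8)            # descriptors of a query compound
      print("predicted endpoint:", gaussian_kernel_predict(X_train, y_train, x_new))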

  5. Statistics Canada's Definition and Classification of Postsecondary and Adult Education Providers in Canada. Culture, Tourism and the Centre for Education Statistics. Research Paper. Catalogue no. 81-595-M No. 071

    ERIC Educational Resources Information Center

    Orton, Larry

    2009-01-01

    This document outlines the definitions and the typology now used by Statistics Canada's Centre for Education Statistics to identify, classify and delineate the universities, colleges and other providers of postsecondary and adult education in Canada for which basic enrollments, graduates, professors and finance statistics are produced. These new…

  6. Trends in basic mathematical competencies of beginning undergraduates in Ireland, 2003-2013

    NASA Astrophysics Data System (ADS)

    Treacy, Páraic; Faulkner, Fiona

    2015-11-01

    Deficiencies in beginning undergraduate students' basic mathematical skills have been an issue of concern in higher education, particularly in the past 15 years. This issue has been tracked and analysed in a number of universities in Ireland and internationally through student scores recorded in mathematics diagnostic tests. Students beginning their science-based and technology-based undergraduate courses in the University of Limerick have had their basic mathematics skills tested, without any prior warning, through a 40-question diagnostic test during their initial service mathematics lecture since 1998. Data gathered through this diagnostic test have been recorded in a database kept at the university and explored to track trends in the mathematical competency of these beginning undergraduates. This paper details findings from an analysis of the database between 2003 and 2013, outlining changes in the mathematical competencies of these beginning undergraduates in an attempt to determine reasons for such changes. The analysis found that the proportion of students tested through this diagnostic test who are predicted to be at risk of failing their service mathematics end-of-semester examinations increased significantly between 2003 and 2013. Furthermore, when students' performance in secondary-level mathematics was controlled for, the performance of beginning undergraduates in 2013 was statistically significantly below that of beginning undergraduates recorded 10 years previously.

  7. Building Capacity for Developing Statistical Literacy in a Developing Country: Lessons Learned from an Intervention

    ERIC Educational Resources Information Center

    North, Delia; Gal, Iddo; Zewotir, Temesgen

    2014-01-01

    This paper aims to contribute to the emerging literature on capacity-building in statistics education by examining issues pertaining to the readiness of teachers in a developing country to teach basic statistical topics. The paper reflects on challenges and barriers to building statistics capacity at grass-roots level in a developing country,…

  8. Impact of structural and economic factors on hospitalization costs, inpatient mortality, and treatment type of traumatic hip fractures in Switzerland.

    PubMed

    Mehra, Tarun; Moos, Rudolf M; Seifert, Burkhardt; Bopp, Matthias; Senn, Oliver; Simmen, Hans-Peter; Neuhaus, Valentin; Ciritsis, Bernhard

    2017-12-01

    The assessment of structural and potentially economic factors determining the cost, treatment type, and inpatient mortality of traumatic hip fractures is an important health policy issue. We showed that insurance status and treatment in university hospitals were significantly associated with treatment type (i.e., primary hip replacement), cost, and lower inpatient mortality, respectively. The purpose of this study was to determine the influence of the structural level of hospital care and patient insurance type on treatment, hospitalization cost, and inpatient mortality in cases of traumatic hip fracture in Switzerland. The Swiss national medical statistic 2011-2012 was screened for adults with hip fracture as the primary diagnosis. Gender, age, insurance type, year of discharge, hospital infrastructure level, length of stay, case weight, reason for discharge, and all coded diagnoses and procedures were extracted. Descriptive statistics and multivariate logistic regression with treatment by primary hip replacement as well as inpatient mortality as dependent variables were performed. We obtained 24,678 inpatient case records from the medical statistic. Hospitalization costs were calculated from a second dataset, the Swiss national cost statistic (7528 cases with hip fractures, discharged in 2012). Average inpatient costs per case were highest for discharges from university hospitals (US$21,471, SD US$17,015) and lowest in basic coverage hospitals (US$18,291, SD US$12,635). Controlling for other variables, higher costs for hip fracture treatment at university hospitals remained significant in multivariate regression (p < 0.001). University hospitals had a lower inpatient mortality rate than full and basic care providers (2.8% vs. 4.0% for both); these results were confirmed in our multivariate logistic regression analysis (odds ratio (OR) 1.434, 95% CI 1.127-1.824 and OR 1.459, 95% confidence interval (CI) 1.139-1.870 for full and basic coverage hospitals vs. university hospitals, respectively). The proportion of privately insured patients varied between 16.0% in university hospitals and 38.9% in specialized hospitals. Private insurance had an OR of 1.419 (95% CI 1.306-1.542) in predicting treatment of a hip fracture with primary hip replacement. The apparent influence of insurance type on hip fracture treatment and the large inequity in the distribution of privately insured patients between provider types would be worth a closer look by the regulatory authorities. Better outcomes, i.e., lower mortality rates for hip fracture treatment in hospitals with a higher structural care level, advocate centralization of care.
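
    The kind of multivariate logistic regression reported above, with odds ratios obtained by exponentiating the coefficients, can be sketched as follows; the simulated data and covariates are placeholders, not the Swiss registry variables.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 1000
      df = pd.DataFrame({
          "private_insurance": rng.integers(0, 2, n),
          "age": rng.normal(80, 8, n),
          "university_hospital": rng.integers(0, 2, n),
      })
      # Simulate the outcome so that private insurance raises the odds of replacement.
      logit = -1.0 + 0.35 * df["private_insurance"] - 0.01 * (df["age"] - 80)
      df["primary_replacement"] = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = df[["private_insurance", "age", "university_hospital"]]
      model = LogisticRegression(C=1e6, max_iter=2000).fit(X, df["primary_replacement"])
      for name, or_ in zip(X.columns, np.exp(model.coef_[0])):
          print(f"OR({name}) = {or_:.2f}")   # odds ratio per covariate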

  9. [The Basic-Symptom Concept and its Influence on Current International Research on the Prediction of Psychoses].

    PubMed

    Schultze-Lutter, F

    2016-12-01

    The early detection of psychoses has become increasingly relevant in research and clinical practice. Next to the ultra-high-risk (UHR) approach, which targets an immediate risk of developing frank psychosis, the basic symptom approach, which targets the earliest possible detection of the developing disorder, is being increasingly used worldwide. The present review gives an introduction to the development and basic assumptions of the basic symptom concept, summarizes the results of studies on the specificity of basic symptoms for psychoses in different age groups as well as studies of their psychosis-predictive value, and gives an outlook on future results. Moreover, a brief introduction is given to the first recent imaging studies that support one of the main assumptions of the basic symptom concept, i.e., that basic symptoms are the most immediate phenomenological expression of the cerebral aberrations underlying the development of psychosis. From this, it is concluded that basic symptoms might be able to provide important information for future neurobiological research on the etiopathology of psychoses. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Avoid violence, rioting, and outrage; approach celebration, delight, and strength: Using large text corpora to compute valence, arousal, and the basic emotions.

    PubMed

    Westbury, Chris; Keith, Jeff; Briesemeister, Benny B; Hofmann, Markus J; Jacobs, Arthur M

    2015-01-01

    Ever since Aristotle discussed the issue in Book II of his Rhetoric, humans have attempted to identify a set of "basic emotion labels". In this paper we propose an algorithmic method for evaluating sets of basic emotion labels that relies upon computed co-occurrence distances between words in a 12.7-billion-word corpus of unselected text from USENET discussion groups. Our method uses the relationship between human arousal and valence ratings collected for a large list of words, and the co-occurrence similarity between each word and emotion labels. We assess how well the words in each of 12 emotion label sets-proposed by various researchers over the past 118 years-predict the arousal and valence ratings on a test and validation dataset, each consisting of over 5970 items. We also assess how well these emotion labels predict lexical decision residuals (LDRTs), after co-varying out the effects attributable to basic lexical predictors. We then demonstrate a generalization of our method to determine the most predictive "basic" emotion labels from among all of the putative models of basic emotion that we considered. As well as contributing empirical data towards the development of a more rigorous definition of basic emotions, our method makes it possible to derive principled computational estimates of emotionality-specifically, of arousal and valence-for all words in the language.

  11. Multimachine data–based prediction of high-frequency sensor signal noise for resistive wall mode control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yueqiang; Sabbagh, S. A.; Chapman, I. T.

    The high-frequency noise measured by magnetic sensors, at levels above the typical frequency of resistive wall modes, is analyzed across a range of present tokamak devices including DIII-D, JET, MAST, ASDEX Upgrade, JT-60U, and NSTX. A high-pass filter enables identification of the noise component with Gaussian-like statistics that shares certain common characteristics in all devices considered. A conservative prediction is made for ITER plasma operation of the high-frequency noise component of the sensor signals, to be used for resistive wall mode feedback stabilization, based on the multimachine database. The predicted root-mean-square n = 1 (n is the toroidal mode number) noise level is 10^4 to 10^5 G/s for the voltage signal, and 0.1 to 1 G for the perturbed magnetic field signal. The lower cutoff frequency of the Gaussian pickup noise scales linearly with the sampling frequency, with a scaling coefficient of about 0.1. As a result, these basic noise characteristics should be useful for the modeling-based design of the feedback control system for the resistive wall mode in ITER.
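
    The basic signal-processing step described here, isolating the high-frequency noise component above the resistive wall mode band and summarizing it by its root-mean-square level, can be sketched as follows. The cutoff and sampling frequencies are assumed values chosen only to reflect the quoted scaling of roughly 0.1 times the sampling frequency; this is not the authors' analysis code.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 100e3                      # assumed sampling frequency [Hz]
      f_cut = 0.1 * fs                # lower cutoff ~0.1 * fs, per the scaling above
      t = np.arange(0, 0.1, 1 / fs)
      rng = np.random.default_rng(3)
      signal = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)

      b, a = butter(4, f_cut, btype="highpass", fs=fs)
      noise = filtfilt(b, a, signal)  # slow, RWM-scale content removed
      print("RMS of high-frequency component:", np.sqrt(np.mean(noise ** 2)))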

  12. Multimachine data–based prediction of high-frequency sensor signal noise for resistive wall mode control in ITER

    DOE PAGES

    Liu, Yueqiang; Sabbagh, S. A.; Chapman, I. T.; ...

    2017-03-27

    The high-frequency noise measured by magnetic sensors, at levels above the typical frequency of resistive wall modes, is analyzed across a range of present tokamak devices including DIII-D, JET, MAST, ASDEX Upgrade, JT-60U, and NSTX. A high-pass filter enables identification of the noise component with Gaussian-like statistics that shares certain common characteristics in all devices considered. A conservative prediction is made for ITER plasma operation of the high-frequency noise component of the sensor signals, to be used for resistive wall mode feedback stabilization, based on the multimachine database. The predicted root-mean-square n = 1 (n is the toroidal mode number) noise level is 10^4 to 10^5 G/s for the voltage signal, and 0.1 to 1 G for the perturbed magnetic field signal. The lower cutoff frequency of the Gaussian pickup noise scales linearly with the sampling frequency, with a scaling coefficient of about 0.1. As a result, these basic noise characteristics should be useful for the modeling-based design of the feedback control system for the resistive wall mode in ITER.

  13. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as the change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximation (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not included in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing material evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.

  14. County-by-County Financial and Staffing I-M-P-A-C-T. FY 1994-95 Basic Education Program.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh.

    This publication provides the basic statistics needed to illustrate the impact of North Carolina's Basic Education Program (BEP), an educational reform effort begun in 1985. Over 85% of the positions in the BEP are directly related to teaching and student-related activities. The new BEP programs result in smaller class sizes in kindergartens and…

  15. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash count data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
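
    The simulation idea summarized above is easy to reproduce in miniature: crashes modeled as Bernoulli trials with small, unequal probabilities (Poisson trials) yield a large share of zero counts whenever exposure is low, without any "perfectly safe" state. The numbers below are illustrative placeholders.

      import numpy as np

      rng = np.random.default_rng(4)
      n_sites, exposure = 1000, 50                 # 50 passages per site-period (low exposure)
      p = rng.uniform(1e-4, 2e-3, size=n_sites)    # unequal per-passage crash probabilities

      counts = rng.binomial(exposure, p)           # crash count per site-period
      print("proportion of zero counts:", np.mean(counts == 0))
      print("mean crash count:", counts.mean())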

  16. Phase Transitions in Combinatorial Optimization Problems: Basics, Algorithms and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2005-10-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.

  17. Financial Statistics. Higher Education General Information Survey (HEGIS) [machine-readable data file].

    ERIC Educational Resources Information Center

    Center for Education Statistics (ED/OERI), Washington, DC.

    The Financial Statistics machine-readable data file (MRDF) is a subfile of the larger Higher Education General Information Survey (HEGIS). It contains basic financial statistics for over 3,000 institutions of higher education in the United States and its territories. The data are arranged sequentially by institution, with institutional…

  18. The Greyhound Strike: Using a Labor Dispute to Teach Descriptive Statistics.

    ERIC Educational Resources Information Center

    Shatz, Mark A.

    1985-01-01

    A simulation exercise of a labor-management dispute is used to teach psychology students some of the basics of descriptive statistics. Using comparable data sets generated by the instructor, students work in small groups to develop a statistical presentation that supports their particular position in the dispute. (Author/RM)

  19. Fundamentals of Counting Statistics in Digital PCR: I Just Measured Two Target Copies-What Does It Mean?

    PubMed

    Tzonev, Svilen

    2018-01-01

    Current commercially available digital PCR (dPCR) systems and assays are capable of detecting individual target molecules with considerable reliability. As tests are developed and validated for use on clinical samples, the need to understand and develop robust statistical analysis routines increases. This chapter covers the fundamental processes and limitations of detecting and reporting on single-molecule detection. We cover the basics of target quantification and the sources of imprecision. We describe the basic test concepts of sensitivity, specificity, limit of blank, limit of detection, and limit of quantification in the context of dPCR. We provide basic guidelines on how to determine these, how to choose and interpret the operating point, and what factors may influence overall test performance in practice.
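
    The core quantification step in digital PCR rests on a Poisson correction: from the fraction of positive partitions one estimates the mean number of copies per partition and hence the concentration. A generic sketch follows; the partition count and volume are assumed example values, not tied to any particular instrument.

      import math

      n_partitions = 20000
      n_positive = 2                      # the "two target copies" scenario in the title
      p = n_positive / n_partitions
      lam = -math.log(1.0 - p)            # mean copies per partition (Poisson correction)
      partition_volume_ul = 0.85e-3       # assumed partition volume (0.85 nL) in microlitres
      copies_per_ul = lam / partition_volume_ul
      print(f"lambda = {lam:.2e} copies/partition, ~{copies_per_ul:.2f} copies/uL")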

  20. α -induced reactions on 115In: Cross section measurements and statistical model analysis

    NASA Astrophysics Data System (ADS)

    Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.

    2018-05-01

    Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist and large deviations, reaching sometimes even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between Ec.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between Ec.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also constrained by the data, although there is no unique best-fit combination. Conclusions: The best-fit calculations allow us to extrapolate the low-energy (α,γ) cross section of 115In to the astrophysical Gamow window with reasonable uncertainties. However, still further improvements of the α-nucleus potential are required for a global description of elastic (α,α) scattering and α-induced reactions in a wide range of masses and energies.

  1. Testing different brain metastasis grading systems in stereotactic radiosurgery: Radiation Therapy Oncology Group's RPA, SIR, BSBM, GPA, and modified RPA.

    PubMed

    Serizawa, Toru; Higuchi, Yoshinori; Nagano, Osamu; Hirai, Tatsuo; Ono, Junichi; Saeki, Naokatsu; Miyakawa, Akifumi

    2012-12-01

    The authors conducted validity testing of the 5 major reported indices for radiosurgically treated brain metastases (the original Radiation Therapy Oncology Group's Recursive Partitioning Analysis (RPA), the Score Index for Radiosurgery in Brain Metastases (SIR), the Basic Score for Brain Metastases (BSBM), the Graded Prognostic Assessment (GPA), and the subclassification of RPA Class II proposed by Yamamoto) in nearly 2500 cases treated with Gamma Knife surgery (GKS), focusing on the preservation of neurological function as well as the traditional endpoint of overall survival. The authors analyzed data from 2445 cases treated with GKS by the first author (T.S.), the primary surgeon. The patient group consisted of 1716 patients treated between January 1998 and March 2008 (the Chiba series) and 729 patients treated between April 2008 and December 2011 (the Tokyo series). The intervals from the date of GKS until the date of the patient's death (overall survival) and until impaired activities of daily living (qualitative survival) were calculated using the Kaplan-Meier method, while the absolute risk for two adjacent classes of each grading system and both hazard ratios and 95% confidence intervals were estimated using the Cox proportional hazards model. For overall survival, there were highly statistically significant differences between each pair of adjacent patient groups characterized by class or score (all p values < 0.001), except for GPA Scores 3.5-4.0 and 3.0. The SIR showed the best statistical results for predicting preservation of neurological function. Although no other grading system yielded statistically significant differences in qualitative survival, the BSBM and the modified RPA appeared to be better than the original RPA and GPA. The modified RPA subclassification, proposed by Yamamoto, is well balanced in scoring simplicity with respect to case number distribution and statistical results for overall survival. However, a new or revised grading system is necessary for predicting qualitative survival and for selecting the optimal treatment for patients with brain metastasis treated by GKS.
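
    The two survival tools named above, Kaplan-Meier estimation and Cox proportional hazards modeling, can be illustrated with the lifelines library; the tiny data frame and column names below are placeholders, not the Chiba or Tokyo series.

      import pandas as pd
      from lifelines import KaplanMeierFitter, CoxPHFitter

      df = pd.DataFrame({
          "months": [3, 7, 12, 5, 22, 9, 15, 2, 30, 11],   # time from GKS (months)
          "died":   [1, 1, 0, 1, 0, 1, 1, 1, 0, 1],        # 1 = event observed
          "grade":  [2, 2, 1, 3, 1, 2, 1, 3, 1, 2],        # hypothetical prognostic class
      })

      kmf = KaplanMeierFitter().fit(df["months"], event_observed=df["died"])
      print("median overall survival (months):", kmf.median_survival_time_)

      cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
      print(cph.hazard_ratios_)          # hazard ratio per one-step increase in grade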

  2. Statistical methods for the analysis of climate extremes

    NASA Astrophysics Data System (ADS)

    Naveau, Philippe; Nogaj, Marta; Ammann, Caspar; Yiou, Pascal; Cooley, Daniel; Jomelli, Vincent

    2005-08-01

    Currently there is increasing research activity in the area of climate extremes because they represent a key manifestation of non-linear systems and have an enormous impact on economic and social human activities. Our understanding of the mean behavior of climate and its 'normal' variability has been improving significantly during the last decades. In comparison, climate extreme events have been hard to study and even harder to predict because they are, by definition, rare and obey different statistical laws than averages. In this context, the motivation for this paper is twofold. Firstly, we recall the basic principles of Extreme Value Theory, which is used on a regular basis in finance and hydrology but does not yet have the same success in climate studies. More precisely, the theoretical distributions of maxima and large peaks are recalled. The parameters of such distributions are estimated with the maximum likelihood estimation procedure, which offers the flexibility to take explanatory variables into account in our analysis. Secondly, we detail three case studies to show that this theory can provide a solid statistical foundation, especially when assessing the uncertainty associated with extreme events in a wide range of applications linked to the study of our climate. To cite this article: P. Naveau et al., C. R. Geoscience 337 (2005).
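
    A minimal example of the block-maxima machinery recalled above: fit a Generalized Extreme Value (GEV) distribution to annual maxima by maximum likelihood and read off a return level. The synthetic series stands in for a real climate record.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      annual_max = rng.gumbel(loc=30.0, scale=5.0, size=60)    # e.g. 60 years of maxima

      shape, loc, scale = stats.genextreme.fit(annual_max)      # maximum likelihood fit
      rl_100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
      print(f"GEV shape={shape:.2f}, loc={loc:.1f}, scale={scale:.1f}")
      print(f"estimated 100-year return level: {rl_100:.1f}")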

  3. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
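
    The stochastic-computing principle behind the DMNN can be shown in a few lines: values are encoded as the probability of 1s in pseudorandom pulse sequences, a single AND gate multiplies two such values, and the accuracy of the estimate grows with sequence length. This is a conceptual sketch, not the VHDL network itself.

      import numpy as np

      rng = np.random.default_rng(6)
      a, b, n_bits = 0.7, 0.4, 4096

      stream_a = rng.random(n_bits) < a       # pulse sequence encoding a
      stream_b = rng.random(n_bits) < b       # pulse sequence encoding b
      product = np.mean(stream_a & stream_b)  # AND gate followed by pulse counting

      print(f"stochastic estimate of a*b: {product:.3f} (exact: {a * b:.3f})")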

  4. Nurses' foot care activities in home health care.

    PubMed

    Stolt, Minna; Suhonen, Riitta; Puukka, Pauli; Viitanen, Matti; Voutilainen, Päivi; Leino-Kilpi, Helena

    2013-01-01

    This study described the basic foot care activities performed by nurses and factors associated with these in the home care of older people. Data were collected from nurses (n=322) working in nine public home care agencies in Finland using the Nurses' Foot Care Activities Questionnaire (NFAQ). Data were analyzed statistically using descriptive statistics and multivariate linear models. Although some of the basic foot care activities nurses reported using were outdated, the majority of foot care activities were consistent with the recommendations in the foot care literature. Longer working experience, referring patients with foot problems to a podiatrist and physiotherapist, and patient education in wart and nail care were associated with a high score for adequate foot care activities. Continuing education should focus on updating basic foot care activities and increasing the use of evidence-based foot care methods. In addition, geriatric nursing research should focus on intervention research to improve the use of evidence-based basic foot care activities. Copyright © 2013 Mosby, Inc. All rights reserved.

  5. Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.

    1976-01-01

    The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.

  6. StegoWall: blind statistical detection of hidden data

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Sviatoslav V.; Herrigel, Alexander; Rytsar, Yuri B.; Pun, Thierry

    2002-04-01

    Novel functional possibilities provided by recent data hiding technologies carry the danger of uncontrolled (unauthorized) and unlimited information exchange that might be used by people with unfriendly interests. The multimedia industry as well as the research community recognize the urgent necessity for network security and copyright protection, or rather the lack of adequate laws for digital multimedia protection. This paper advocates the need for detecting hidden data in digital and analog media as well as in electronic transmissions, and for attempting to identify the underlying hidden data. Solving this problem calls for the development of an architecture for blind stochastic hidden data detection in order to prevent unauthorized data exchange. The proposed architecture is called StegoWall; its key aspects are the solid investigation, the deep understanding, and the prediction of possible tendencies in the development of advanced data hiding technologies. The basic idea of our complex approach is to exploit all available information about hidden data statistics to perform detection based on a stochastic framework. The StegoWall system will be used for four main applications: robust watermarking, secret communications, integrity control and tamper proofing, and internet/network security.

  7. Food Choice Questionnaire (FCQ) revisited. Suggestions for the development of an enhanced general food motivation model.

    PubMed

    Fotopoulos, Christos; Krystallis, Athanasios; Vassallo, Marco; Pagiaslis, Anastasios

    2009-02-01

    Recognising the need for a more statistically robust instrument to investigate general food selection determinants, this research validates and confirms the Food Choice Questionnaire's (FCQ) factorial design, develops ad hoc a more robust FCQ version, and tests its ability to discriminate between consumer segments in terms of the importance they assign to the FCQ motivational factors. The original FCQ appears to represent a comprehensive and reliable research instrument. However, the empirical data do not support the robustness of its 9-factor design. On the other hand, segmentation results at the subpopulation level based on the enhanced FCQ version convey an optimistic message about the FCQ's ability to predict food selection behaviour. The paper concludes that some of the basic components of the original FCQ can be used as a basis for a new general food motivation typology. The development of such a new instrument, with fewer, higher-abstraction FCQ-based dimensions and fewer items per dimension, is a right step forward; yet such a step should be theory-driven, and rigorous statistical testing across and within populations would be necessary.

  8. The Predictive Validity of the Assessment of Basic Learning Abilities versus Parents' Predictions with Children with Autism

    ERIC Educational Resources Information Center

    Murphy, Colleen; Martin, Garry L.; Yu, C. T.

    2014-01-01

    The Assessment of Basic Learning Abilities (ABLA) is an empirically validated clinical tool for assessing the learning ability of persons with intellectual disabilities and children with autism. An ABLA tester uses standardized prompting and reinforcement procedures to attempt to teach, individually, each of six tasks, called levels, to a testee,…

  9. Predictive Role of Grit and Basic Psychological Needs Satisfaction on Subjective Well-Being for Young Adults

    ERIC Educational Resources Information Center

    Akbag, Müge; Ümmet, Durmus

    2017-01-01

    In this research, it is aimed to investigate the predictive role of grit as a personality trait and basic psychological needs satisfaction on subjective well-being among young adults. Participants of this research are 348 voluntary young adults who are final year undergraduate students in the government universities of Istanbul city, Turkey, as…

  10. The Role of Basic Needs Fulfillment in Prediction of Subjective Well-Being among University Students

    ERIC Educational Resources Information Center

    Turkdogan, Turgut; Duru, Erdinc

    2012-01-01

    The aim of this study is to examine the role of fulfillment level of university students' basic needs in predicting the level of their subjective well being. The participants were 627 students (56% female, 44% male) attending different faculties of Pamukkale University. In this study, subjective well being was measured with Life Satisfaction Scale…

  11. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  12. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  13. Assessment tools for unrecognized myocardial infarction: a cross-sectional analysis of the REasons for geographic and racial differences in stroke population

    PubMed Central

    2013-01-01

    Background Routine electrocardiograms (ECGs) are not recommended for asymptomatic patients because the potential harms are thought to outweigh any benefits. Assessment tools to identify high-risk individuals may improve the harm versus benefit profile of screening ECGs. In particular, people with unrecognized myocardial infarction (UMI) have elevated risk for cardiovascular events and death. Methods Using logistic regression, we developed a basic assessment tool among 16,653 participants in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study using demographics, self-reported medical history, blood pressure, and body mass index, and an expanded assessment tool using information on 51 potential variables. UMI was defined as electrocardiogram evidence of myocardial infarction without a self-reported history (n = 740). Results The basic assessment tool had a c-statistic of 0.638 (95% confidence interval 0.617-0.659) and included age, race, smoking status, body mass index, systolic blood pressure, and self-reported history of transient ischemic attack, deep vein thrombosis, falls, diabetes, and hypertension. A predicted probability of UMI > 3% provided a sensitivity of 80% and a specificity of 30%. The expanded assessment tool had a c-statistic of 0.654 (95% confidence interval 0.634-0.674). Because of the poor performance of these assessment tools, external validation was not pursued. Conclusions Despite examining a large number of potential correlates of UMI, the assessment tools did not provide a high level of discrimination. These data suggest defining groups with high prevalence of UMI for targeted screening will be difficult. PMID:23530553
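
    The c-statistic reported above is simply the area under the ROC curve for the model's predicted probabilities of UMI. A minimal sketch with placeholder labels and predictions:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(8)
      y_true = rng.integers(0, 2, size=500)                   # UMI present / absent
      y_prob = np.clip(0.3 * y_true + rng.normal(0.35, 0.15, size=500), 0, 1)

      print("c-statistic:", round(roc_auc_score(y_true, y_prob), 3))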

  14. Revealing spatio-temporal patterns of rabies spread among various categories of animals in the Republic of Kazakhstan, 2010-2013.

    PubMed

    Abdrakhmanov, Sarsenbay K; Beisembayev, Kanatzhan K; Korennoy, Fedor I; Yessembekova, Gulzhan N; Kushubaev, Dosym B; Kadyrov, Ablaikhan S

    2016-05-31

    This study estimated the basic reproductive ratio of rabies at the population level in wild animals (foxes), farm animals (cattle, camels, horses, sheep), and what we classified as domestic animals (cats, dogs) in the Republic of Kazakhstan (RK). It also aimed at forecasting the possible number of new outbreaks in case of emergence of the disease in new territories. We considered cases of rabies in animals in RK from 2010 to 2013, recorded by regional veterinary services. Statistically significant space-time clusters of outbreaks in the three subpopulations were detected by means of Kulldorff's scan statistic. Theoretical curves were then fitted to the epidemiological data within each cluster assuming exponential initial growth, followed by calculation of the basic reproductive ratio R0. For farm animals, the value of R0 was 1.62 (1.11-2.26) and for wild animals 1.84 (1.08-3.13), while it was close to 1 for domestic animals. Using the values obtained, the initial phase of a possible epidemic was simulated in order to predict the expected number of secondary cases if the disease were introduced into a new area. The possible number of new cases over 20 weeks was estimated at 5 (1-16) for farm animals, 17 (1-113) for wild animals, and about 1 for domestic animals. These results have been used to produce a set of recommendations for organising preventive and counter-epizootic measures against rabies, expected to be applied by state veterinary services.
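
    The estimation route described above can be sketched generically (the paper's exact epidemic model is not given here): fit an exponential growth rate r to early case counts within a cluster, then convert it to a basic reproductive ratio using the common SIR-type approximation R0 ≈ 1 + r·T, where T is an assumed mean generation time. All numbers below are hypothetical.

      import numpy as np

      weeks = np.arange(10)
      cases = np.array([2, 3, 4, 6, 9, 12, 18, 25, 36, 50])   # hypothetical early counts

      r = np.polyfit(weeks, np.log(cases), 1)[0]   # exponential growth rate per week
      T = 4.0                                      # assumed generation time (weeks)
      R0 = 1.0 + r * T
      print(f"growth rate r = {r:.2f}/week, R0 ≈ {R0:.2f}")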

  15. Novel Biomarkers to Improve the Prediction of Cardiovascular Event Risk in Type 2 Diabetes Mellitus.

    PubMed

    van der Leeuw, Joep; Beulens, Joline W J; van Dieren, Susan; Schalkwijk, Casper G; Glatz, Jan F C; Hofker, Marten H; Verschuren, W M Monique; Boer, Jolanda M A; van der Graaf, Yolanda; Visseren, Frank L J; Peelen, Linda M; van der Schouw, Yvonne T

    2016-05-31

    We evaluated the ability of 23 novel biomarkers representing several pathophysiological pathways to improve the prediction of cardiovascular event (CVE) risk in patients with type 2 diabetes mellitus beyond traditional risk factors. We used data from 1002 patients with type 2 diabetes mellitus from the Second Manifestations of ARTerial disease (SMART) study and 288 patients from the European Prospective Investigation into Cancer and Nutrition-NL (EPIC-NL). The associations of 23 biomarkers (adiponectin, C-reactive protein, epidermal-type fatty acid binding protein, heart-type fatty acid binding protein, basic fibroblast growth factor, soluble FMS-like tyrosine kinase-1, soluble intercellular adhesion molecule-1 and -3, matrix metalloproteinase [MMP]-1, MMP-3, MMP-9, N-terminal prohormone of B-type natriuretic peptide, osteopontin, osteonectin, osteocalcin, placental growth factor, serum amyloid A, E-selectin, P-selectin, tissue inhibitor of MMP-1, thrombomodulin, soluble vascular cell adhesion molecule-1, and vascular endothelial growth factor) with CVE risk were evaluated by using Cox proportional hazards analysis adjusting for traditional risk factors. The incremental predictive performance was assessed with the c-statistic and the net reclassification index (NRI; continuous and based on 10-year risk strata 0-10%, 10-20%, 20-30%, >30%). A multimarker model was constructed comprising those biomarkers that improved predictive performance in both cohorts. N-terminal prohormone of B-type natriuretic peptide, osteopontin, and MMP-3 were the only biomarkers significantly associated with an increased risk of CVE that also improved predictive performance in both cohorts. In SMART, the combination of these biomarkers increased the c-statistic by 0.03 (95% CI 0.01-0.05), and the continuous NRI was 0.37 (95% CI 0.21-0.52). In EPIC-NL, the multimarker model increased the c-statistic by 0.03 (95% CI 0.00-0.03), and the continuous NRI was 0.44 (95% CI 0.23-0.66). Based on risk strata, the NRI was 0.12 (95% CI 0.03-0.21) in SMART and 0.07 (95% CI -0.04-0.17) in EPIC-NL. Of the 23 evaluated biomarkers from different pathophysiological pathways, N-terminal prohormone of B-type natriuretic peptide, osteopontin, MMP-3, and their combination improved CVE risk prediction in 2 separate cohorts of patients with type 2 diabetes mellitus beyond traditional risk factors. However, the number of patients reclassified to a different risk stratum was limited. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  16. Fracture mechanics concepts in reliability analysis of monolithic ceramics

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.; Gyekenyesi, John P.

    1987-01-01

    Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation require that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two-parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine the statistical parameters that describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
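
    The Weibull strength model mentioned above has a simple closed form for a component under uniform uniaxial stress; with the threshold (third) parameter set to zero it reduces to the two-parameter form used in the combined fracture-mechanics/Weibull approach. Parameter values below are placeholders, and a real component analysis would integrate over the stressed volume.

      import math

      def weibull_failure_probability(sigma, sigma_threshold, sigma_0, m):
          """P_f = 1 - exp(-(((sigma - sigma_threshold) / sigma_0) ** m)), for sigma above threshold."""
          if sigma <= sigma_threshold:
              return 0.0
          return 1.0 - math.exp(-(((sigma - sigma_threshold) / sigma_0) ** m))

      # Example: characteristic strength 300 MPa, Weibull modulus 10, zero threshold strength
      print(weibull_failure_probability(sigma=250.0, sigma_threshold=0.0, sigma_0=300.0, m=10))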

  17. Computational prediction of probabilistic ignition threshold of pressed granular Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under shock loading

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Miller, Christopher; Horie, Yasuyuki; Molek, Christopher; Welle, Eric; Zhou, Min

    2016-09-01

    The probabilistic ignition thresholds of pressed granular Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine explosives with average grain sizes between 70 μm and 220 μm are computationally predicted. The prediction uses material microstructure and basic constituent properties and does not involve curve fitting with respect to or prior knowledge of the attributes being predicted. The specific thresholds predicted are James-type relations between the energy flux and energy fluence for given probabilities of ignition. Statistically similar microstructure sample sets are computationally generated and used based on the features of micrographs of materials used in actual experiments. The predicted thresholds are in general agreement with measurements from shock experiments in terms of trends. In particular, it is found that grain size significantly affects the ignition sensitivity of the materials, with smaller sizes leading to lower energy thresholds required for ignition. For example, 50% ignition threshold of the material with an average grain size of 220 μm is approximately 1.4-1.6 times that of the material with an average grain size of 70 μm in terms of energy fluence. The simulations account for the controlled loading of thin-flyer shock experiments with flyer velocities between 1.5 and 4.0 km/s, constituent elasto-viscoplasticity, fracture, post-fracture contact and friction along interfaces, bulk inelastic heating, interfacial frictional heating, and heat conduction. The constitutive behavior of the materials is described using a finite deformation elasto-viscoplastic formulation and the Birch-Murnaghan equation of state. The ignition thresholds are determined via an explicit analysis of the size and temperature states of hotspots in the materials and a hotspot-based ignition criterion. The overall ignition threshold analysis and the microstructure-level hotspot analysis also lead to the definition of a macroscopic ignition parameter (J) and a microscopic ignition risk parameter (R) which are statistically related. The relationships between these parameters are established and delineated.

  18. Intraindividual variability in basic reaction time predicts middle-aged and older pilots' flight simulator performance.

    PubMed

    Kennedy, Quinn; Taylor, Joy; Heraldez, Daniel; Noda, Art; Lazzeroni, Laura C; Yesavage, Jerome

    2013-07-01

    Intraindividual variability (IIV) is negatively associated with cognitive test performance and is positively associated with age and some neurological disorders. We aimed to extend these findings to a real-world task, flight simulator performance. We hypothesized that IIV predicts poorer initial flight performance and increased rate of decline in performance among middle-aged and older pilots. Two-hundred and thirty-six pilots (40-69 years) completed annual assessments comprising a cognitive battery and two 75-min simulated flights in a flight simulator. Basic and complex IIV composite variables were created from measures of basic reaction time and shifting and divided attention tasks. Flight simulator performance was characterized by an overall summary score and scores on communication, emergencies, approach, and traffic avoidance components. Although basic IIV did not predict rate of decline in flight performance, it had a negative association with initial performance for most flight measures. After taking into account processing speed, basic IIV explained an additional 8%-12% of the negative age effect on initial flight performance. IIV plays an important role in real-world tasks and is another aspect of cognition that underlies age-related differences in cognitive performance.
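
    Intraindividual variability scores of the kind used above are commonly computed as the standard deviation, or coefficient of variation, of one person's trial-to-trial reaction times on a basic RT task; the study's composites may differ in detail. A short sketch with simulated trials:

      import numpy as np

      rng = np.random.default_rng(7)
      rt_ms = rng.normal(450, 60, size=120)     # one pilot's 120 reaction-time trials (ms)

      iiv_sd = rt_ms.std(ddof=1)                # raw intraindividual standard deviation
      iiv_cv = iiv_sd / rt_ms.mean()            # coefficient of variation controls for mean speed
      print(f"IIV (SD) = {iiv_sd:.1f} ms, IIV (CV) = {iiv_cv:.3f}")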

  19. Intraindividual Variability in Basic Reaction Time Predicts Middle-Aged and Older Pilots’ Flight Simulator Performance

    PubMed Central

    2013-01-01

    Objectives. Intraindividual variability (IIV) is negatively associated with cognitive test performance and is positively associated with age and some neurological disorders. We aimed to extend these findings to a real-world task, flight simulator performance. We hypothesized that IIV predicts poorer initial flight performance and increased rate of decline in performance among middle-aged and older pilots. Method. Two-hundred and thirty-six pilots (40–69 years) completed annual assessments comprising a cognitive battery and two 75-min simulated flights in a flight simulator. Basic and complex IIV composite variables were created from measures of basic reaction time and shifting and divided attention tasks. Flight simulator performance was characterized by an overall summary score and scores on communication, emergencies, approach, and traffic avoidance components. Results. Although basic IIV did not predict rate of decline in flight performance, it had a negative association with initial performance for most flight measures. After taking into account processing speed, basic IIV explained an additional 8%–12% of the negative age effect on initial flight performance. Discussion. IIV plays an important role in real-world tasks and is another aspect of cognition that underlies age-related differences in cognitive performance. PMID:23052365

  20. Developing Competency of Teachers in Basic Education Schools

    ERIC Educational Resources Information Center

    Yuayai, Rerngrit; Chansirisira, Pacharawit; Numnaphol, Kochaporn

    2015-01-01

    This study aims to develop competency of teachers in basic education schools. The research instruments included the semi-structured in-depth interview form, questionnaire, program developing competency, and evaluation competency form. The statistics used for data analysis were percentage, mean, and standard deviation. The research found that…

  1. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Cordoba (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Cordoba).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Cordoba, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  2. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Narino (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Narino).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Narino, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  3. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Cauca (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Cauca).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Cauca, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  4. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Caldas (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Caldas).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Caldas, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  5. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Boyaca (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Boyaca).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Boyaca, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  6. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Huila (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Huila).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Huila, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  7. Autonomy support, basic psychological needs and well-being in Mexican athletes.

    PubMed

    López-Walle, Jeanette; Balaguer, Isabel; Castillo, Isabel; Tristán, José

    2012-11-01

    Based on Basic Needs Theory, one of the mini-theories of Self-determination Theory (Ryan & Deci, 2002), the present study had two objectives: (a) to test a model in the Mexican sport context based on the following sequence: perceived coach autonomy support, basic psychological needs satisfaction, and psychological well-being, and (b) to analyze the mediating effect of basic psychological needs satisfaction between perceived coach autonomy support and indicators of psychological well-being (satisfaction with life and subjective vitality). Six hundred and sixty-nine young Mexican athletes (Boys = 339; Girls = 330; M(age) = 13.95) filled out a questionnaire assessing the study variables. Structural equation analyses revealed that perceived coach autonomy support predicted satisfaction of the basic psychological needs for autonomy, competence, and relatedness. Furthermore, basic need satisfaction predicted subjective vitality and satisfaction with life. Autonomy, competence, and relatedness partially mediated the path from perceived coach autonomy support to psychological well-being in young Mexican athletes.

  8. Health Resources Statistics; Health Manpower and Health Facilities, 1968. Public Health Service Publication No. 1509.

    ERIC Educational Resources Information Center

    National Center for Health Statistics (DHEW/PHS), Hyattsville, MD.

    This report is a part of the program of the National Center for Health Statistics to provide current statistics as baseline data for the evaluation, planning, and administration of health programs. Part I presents data concerning the occupational fields: (1) administration, (2) anthropology and sociology, (3) data processing, (4) basic sciences,…

  9. Personal Docente del Nivel Primario. Series Estadisticas Basicas: Colombia (Teaching Personnel in Primary Schools. Basic Statistics Series: Colombia).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in Colombian elementary schools between 1940 and 1968. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. (VM)

  10. Explorations in Statistics: Standard Deviations and Standard Errors

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…

  11. The Precision-Power-Gradient Theory for Teaching Basic Research Statistical Tools to Graduate Students.

    ERIC Educational Resources Information Center

    Cassel, Russell N.

    This paper relates educational and psychological statistics to certain "Research Statistical Tools" (RSTs) necessary to accomplish and understand general research in the behavioral sciences. Emphasis is placed on acquiring an effective understanding of the RSTs, and to this end they are ordered on a continuum scale in terms of individual…

  12. Estimates of School Statistics, 1971-72.

    ERIC Educational Resources Information Center

    Flanigan, Jean M.

    This report presents public school statistics for the 50 States, the District of Columbia, and the regions and outlying areas of the United States. The text presents national data for each of the past 10 years and defines the basic series of statistics. Tables present the revised estimates by State and region for 1970-71 and the preliminary…

  13. Combining statistical inference and decisions in ecology

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.

    2016-01-01

    Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
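
    A minimal worked example of the loss-function idea central to SDT: the optimal Bayesian point estimate is the posterior mean under squared-error loss and the posterior median under absolute-error loss. The posterior draws below are simulated for illustration only.

        # Minimal sketch: the optimal point estimate depends on the loss function.
        # Squared-error loss -> posterior mean; absolute-error loss -> posterior median.
        import numpy as np

        rng = np.random.default_rng(1)
        posterior_draws = rng.gamma(shape=3.0, scale=2.0, size=10_000)  # hypothetical posterior sample

        estimate_under_squared_loss = posterior_draws.mean()
        estimate_under_absolute_loss = np.median(posterior_draws)
        print(estimate_under_squared_loss, estimate_under_absolute_loss)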

  14. Empirically derived personality subtyping for predicting clinical symptoms and treatment response in bulimia nervosa.

    PubMed

    Haynos, Ann F; Pearson, Carolyn M; Utzinger, Linsey M; Wonderlich, Stephen A; Crosby, Ross D; Mitchell, James E; Crow, Scott J; Peterson, Carol B

    2017-05-01

    Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = 0.03) and purging (p = 0.01) frequency at EOT and binge eating frequency at follow-up (p = 0.045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = 0.04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. (Int J Eat Disord 2017; 50:506-514). © 2016 Wiley Periodicals, Inc.
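
    A minimal sketch of the subtyping workflow described here: standardize personality scale scores, partition patients with K-means (k = 3), and compare an outcome across the resulting clusters. The scores, scale count, and outcome below are simulated placeholders, not the study's data.

        # Hedged sketch of empirically derived personality subtyping with K-means (k = 3).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        personality_scores = rng.normal(size=(80, 12))             # 80 patients x 12 trait scales (simulated)
        X = StandardScaler().fit_transform(personality_scores)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        binge_frequency = rng.poisson(8, size=80)                  # hypothetical outcome at EOT
        for k in range(3):
            print(k, binge_frequency[labels == k].mean())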

  15. Comparative study of the Aristotle Comprehensive Complexity and the Risk Adjustment in Congenital Heart Surgery scores.

    PubMed

    Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal

    2011-09-01

    The Aristotle Comprehensive Complexity (ACC) and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) scores have been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. Previous studies found RACHS-1 to be a better predictor of outcome than the Aristotle Basic Complexity score. We compared the ability to predict operative mortality and morbidity between ACC, the latest update of the Aristotle method, and RACHS-1. Morbidity was assessed by length of intensive care unit stay. We retrospectively enrolled patients undergoing congenital heart surgery. We modeled each score as a continuous variable, mortality as a binary variable, and length of stay as a censored variable. We compared performance between mortality and morbidity models using likelihood ratio tests for nested models and paired concordance statistics. Among all 1,384 patients enrolled, the 30-day mortality rate was 3.5% and median length of intensive care unit stay was 3 days. Both scores strongly related to mortality, but ACC made better predictions than RACHS-1; c-indexes 0.87 (0.84, 0.91) vs 0.75 (0.65, 0.82). Both scores related to overall length of stay only during the first postoperative week, but ACC made better predictions than RACHS-1; U statistic=0.22, p<0.001. No significant difference was noted after adjusting RACHS-1 models on age, prematurity, and major extracardiac abnormalities. The ACC was a better predictor of operative mortality and length of intensive care unit stay than RACHS-1. In order to achieve similar performance, regression models including RACHS-1 need to be further adjusted on age, prematurity, and major extracardiac abnormalities. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Pitfalls in statistical landslide susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" as an inherent part of machine learning methods in order to achieve further explanatory insights into preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding the processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate model response curve interpretation. Model quality assessment evaluates how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples or a possible lack of explanatory information in the chosen set of predictor variables, the model residuals need to be checked for spatial autocorrelation. Therefore, we calculate spline correlograms. In addition to this, we investigate partial dependency plots and bivariate interaction plots considering possible interactions between predictors to improve model interpretation. Aiming at presenting this toolbox for model quality assessment, we investigate the influence of strategies in the construction of training datasets for statistical models on model quality.
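
    In the spirit of the guidelines in this record, the sketch below screens candidate predictors for collinearity, fits a boosted-tree susceptibility model, and assesses discrimination on held-out cells. Predictors, data, and thresholds are hypothetical stand-ins, not the Ecuador inventory.

        # Hedged sketch: collinearity screen, boosted-tree susceptibility model, held-out AUC.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        n = 2000
        slope = rng.uniform(0, 45, n)                 # hypothetical topographic predictors
        curvature = rng.normal(0, 1, n)
        wetness = rng.normal(8, 2, n)
        X = np.column_stack([slope, curvature, wetness])
        landslide = (slope / 45 + 0.3 * curvature + rng.normal(0, 0.4, n) > 0.8).astype(int)

        print(np.corrcoef(X, rowvar=False))           # crude multicollinearity screen

        X_tr, X_te, y_tr, y_te = train_test_split(X, landslide, test_size=0.3, random_state=0)
        model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))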

  17. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Background. Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives. This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods. We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results. There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. Conclusion. The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent. PMID:26053876

  18. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis.

    PubMed

    Wu, Yazhou; Zhou, Liang; Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.
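
    A hedged sketch of how proportions such as these could be pooled across studies with a DerSimonian-Laird random-effects model on the logit scale; the event counts below are invented for illustration and are not the study's data.

        # Minimal sketch: random-effects pooling of proportions (DerSimonian-Laird, logit scale).
        import numpy as np

        events = np.array([172, 310, 95])              # hypothetical "yes" counts per study
        n      = np.array([200, 350, 120])             # hypothetical sample sizes

        p = events / n
        y = np.log(p / (1 - p))                        # logit-transformed proportions
        v = 1 / events + 1 / (n - events)              # approximate within-study variances

        w = 1 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)
        tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

        w_star = 1 / (v + tau2)
        y_pooled = np.sum(w_star * y) / np.sum(w_star)
        print("pooled proportion:", 1 / (1 + np.exp(-y_pooled)))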

  19. Basic Pharmaceutical Sciences Examination as a Predictor of Student Performance during Clinical Training.

    ERIC Educational Resources Information Center

    Fassett, William E.; Campbell, William H.

    1984-01-01

    A comparison of Basic Pharmaceutical Sciences Examination (BPSE) results with student performance evaluations in core clerkships, institutional and community externships, didactic and clinical courses, and related basic science coursework revealed the BPSE does not predict student performance during clinical instruction. (MSE)

  20. Senior Computational Scientist | Center for Cancer Research

    Cancer.gov

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP), Basic Science Program, HLA Immunogenetics Section, under the leadership of Dr. Mary Carrington, studies the influence of human leukocyte antigens (HLA) and specific KIR/HLA genotypes on risk of and outcomes to infection, cancer, autoimmune disease, and maternal-fetal disease. Recent studies have focused on the impact of HLA gene expression in disease, the molecular mechanism regulating expression levels, and the functional basis for the effect of differential expression on disease outcome. The lab’s further focus is on the genetic basis for resistance/susceptibility to disease conferred by immunogenetic variation. KEY ROLES/RESPONSIBILITIES: The Senior Computational Scientist will provide research support to the CIP-BSP-HLA Immunogenetics Section, performing biostatistical design, analysis, and reporting of research projects conducted in the lab. This individual will be involved in the implementation of statistical models and data preparation. The successful candidate should have:
    - 5 or more years of competent, innovative biostatistics/bioinformatics research experience beyond doctoral training
    - Considerable experience with statistical software such as SAS, R, and S-Plus
    - Sound knowledge and demonstrated experience of theoretical and applied statistics
    - The ability to write program code to analyze data using statistical analysis software
    - The ability to contribute to the interpretation and publication of research results

  1. Stata companion.

    PubMed

    Brennan, Jennifer Sousa

    2010-01-01

    This chapter is an introductory reference guide highlighting some of the most common statistical topics, broken down into both command-line syntax and graphical interface point-and-click commands. This chapter serves to supplement more formal statistics lessons and expedite using Stata to compute basic analyses.

  2. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, E. M.

    1983-01-01

    A simulation model was developed and programmed in three languages: BASIC, PASCAL, and SLAM. Two of the programs are included in this report: the BASIC and PASCAL language programs. SLAM is not supported by NASA/MSFC facilities and hence was not included. The statistical comparisons of simulations of the same HOSC system configurations are in good agreement with one another and with the operational statistics of HOSC that were obtained. Three variations of the most recent HOSC configuration were run, and some conclusions were drawn as to the system performance under these variations.

  3. Quantum Social Science

    NASA Astrophysics Data System (ADS)

    Haven, Emmanuel; Khrennikov, Andrei

    2013-01-01

    Preface; Part I. Physics Concepts in Social Science? A Discussion: 1. Classical, statistical and quantum mechanics: all in one; 2. Econophysics: statistical physics and social science; 3. Quantum social science: a non-mathematical motivation; Part II. Mathematics and Physics Preliminaries: 4. Vector calculus and other mathematical preliminaries; 5. Basic elements of quantum mechanics; 6. Basic elements of Bohmian mechanics; Part III. Quantum Probabilistic Effects in Psychology: Basic Questions and Answers: 7. A brief overview; 8. Interference effects in psychology - an introduction; 9. A quantum-like model of decision making; Part IV. Other Quantum Probabilistic Effects in Economics, Finance and Brain Sciences: 10. Financial/economic theory in crisis; 11. Bohmian mechanics in finance and economics; 12. The Bohm-Vigier Model and path simulation; 13. Other applications to economic/financial theory; 14. The neurophysiological sources of quantum-like processing in the brain; Conclusion; Glossary; Index.

  4. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations (Version 2)

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2017-05-01

    GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed-memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision, Version 2 of Basics, makes mostly minor additions to functionality and includes some simplifying name changes.

  5. Basic Facts and Figures about the Educational System in Japan.

    ERIC Educational Resources Information Center

    National Inst. for Educational Research, Tokyo (Japan).

    Tables, charts, and graphs convey supporting data that accompany text on various aspects of the Japanese educational system presented in this booklet. There are seven chapters: (1) Fundamental principles of education; (2) Organization of the educational system; (3) Basic statistics of education; (4) Curricula, textbooks, and instructional aids;…

  6. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.

  7. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
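
    A minimal sketch of the evaluation used in these two records: construct an empirical flow duration curve (FDC) from a daily flow series and score a predicted FDC with the Nash-Sutcliffe efficiency. The flow series below are synthetic stand-ins, not the Nepal data.

        # Minimal sketch: empirical FDC and Nash-Sutcliffe efficiency for a predicted FDC.
        import numpy as np

        def flow_duration_curve(flows):
            """Return exceedance probabilities and the corresponding sorted flows."""
            q = np.sort(flows)[::-1]
            exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)   # Weibull plotting positions
            return exceedance, q

        def nash_sutcliffe(obs, sim):
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(4)
        observed = rng.lognormal(mean=1.0, sigma=0.8, size=365)     # synthetic daily flows
        simulated = observed * rng.normal(1.0, 0.1, size=365)       # synthetic model output

        _, fdc_obs = flow_duration_curve(observed)
        _, fdc_sim = flow_duration_curve(simulated)
        print("NSE on the FDC:", nash_sutcliffe(fdc_obs, fdc_sim))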

  8. Performance Data Gathering and Representation from Fixed-Size Statistical Data

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Jin, Haoqiang H.; Schmidt, Melisa A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The two commonly used performance data types in the supercomputing community, statistics and event traces, are discussed and compared. Statistical data are much more compact but lack the probative power event traces offer. Event traces, on the other hand, are unbounded and can easily fill up the entire file system during program execution. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. Two basic ideas are employed: the use of averages to replace recording data for each instance and 'formulae' to represent sequences associated with communication and control flow. The user can incrementally trade off tracing overhead and trace data size against data quality. In other words, the user will be able to limit the amount of trace data collected and, at the same time, carry out some of the analysis event traces offer using space-time views. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected with event traces. We found that the trace files thus obtained are, indeed, small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at runtime to learn longer sequences.

  9. Environmental statistics and optimal regulation.

    PubMed

    Sivak, David A; Thomson, Matt

    2014-09-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.

  10. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models that seek to explain cosmic acceleration through modifications to general relativity (GR) evade stringent Solar System constraints through a restoring, screening mechanism. Down-weighting the high-density, screened regions in favor of the low-density, unscreened ones offers the potential to enhance the amount of information carried in such modified gravity models. In this work, we assess the performance of a new "marked" transformation and perform a systematic comparison with the clipping and logarithmic transformations, in the context of ΛCDM and the symmetron and f(R) modified gravity models. Performance is measured in terms of the fractional boost in the Fisher information and the signal-to-noise ratio (SNR) for these models relative to the statistics derived from the standard density distribution. We find that all three statistics provide improved Fisher boosts over the basic density statistics. The model parameters for the marked and clipped transformation that best enhance signals and the Fisher boosts are determined. We also show that the mark is useful both as a Fourier and real-space transformation; a marked correlation function also enhances the SNR relative to the standard correlation function, and can on mildly nonlinear scales show a significant difference between the ΛCDM and the modified gravity models. Our results demonstrate how a series of simple analytical transformations could dramatically increase the predicted information extracted on deviations from GR from large-scale surveys, and give the prospect for a much more feasible potential detection.
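
    A hedged sketch of a marked density transformation of the kind assessed here: each cell of an overdensity field is re-weighted so that low-density, unscreened regions carry more weight. The specific mark form below is a commonly used choice assumed for illustration, not necessarily the transformation adopted in the paper.

        # Hedged sketch: apply a density-dependent mark to a toy overdensity field.
        import numpy as np

        def mark(delta, delta_s=0.25, p=2.0):
            # Assumed mark form: up-weights underdense cells, down-weights overdense ones.
            return ((1.0 + delta_s) / (1.0 + delta_s + delta)) ** p

        rng = np.random.default_rng(5)
        delta = rng.lognormal(mean=0.0, sigma=0.7, size=(64, 64, 64)) - 1.0   # toy overdensity field
        marked_field = mark(delta) * (1.0 + delta)                             # marked density

        print(delta.std(), marked_field.std())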

  11. Flight-Determined, Subsonic, Lateral-Directional Stability and Control Derivatives of the Thrust-Vectoring F-18 High Angle of Attack Research Vehicle (HARV), and Comparisons to the Basic F-18 and Predicted Derivatives

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles

    1999-01-01

    The subsonic, lateral-directional stability and control derivatives of the thrust-vectoring F-18 High Angle of Attack Research Vehicle (HARV) are extracted from flight data using a maximum likelihood parameter identification technique. State noise is accounted for in the identification formulation and is used to model the uncommanded forcing functions caused by unsteady aerodynamics. Preprogrammed maneuvers provided independent control surface inputs, eliminating problems of identifiability related to correlations between the aircraft controls and states. The HARV derivatives are plotted as functions of angles of attack between 10 deg and 70 deg and compared to flight estimates from the basic F-18 aircraft and to predictions from ground and wind tunnel tests. Unlike maneuvers of the basic F-18 aircraft, the HARV maneuvers were very precise and repeatable, resulting in tightly clustered estimates with small uncertainty levels. Significant differences were found between flight and prediction; however, some of these differences may be attributed to differences in the range of sideslip or input amplitude over which a given derivative was evaluated, and to differences between the HARV external configuration and that of the basic F-18 aircraft, upon which most of the prediction was based. Some HARV derivative fairings have been adjusted using basic F-18 derivatives (with low uncertainties) to help account for differences in variable ranges and the lack of HARV maneuvers at certain angles of attack.

  12. Universal gestational age effects on cognitive and basic mathematic processing: 2 cohorts in 2 countries.

    PubMed

    Wolke, Dieter; Strauss, Vicky Yu-Chun; Johnson, Samantha; Gilmore, Camilla; Marlow, Neil; Jaekel, Julia

    2015-06-01

    To determine whether general cognitive ability, basic mathematic processing, and mathematic attainment are universally affected by gestation at birth, as well as whether mathematic attainment is more strongly associated with cohort-specific factors such as schooling than basic cognitive and mathematical abilities. The Bavarian Longitudinal Study (BLS, 1289 children, 27-41 weeks gestational age [GA]) was used to estimate effects of GA on IQ, basic mathematic processing, and mathematic attainment. These estimations were used to predict IQ, mathematic processing, and mathematic attainment in the EPICure Study (171 children <26 weeks GA). For children born <34 weeks GA, each lower week decreased IQ and mathematic attainment scores by 2.34 (95% CI: -2.99, -1.70) and 2.76 (95% CI: -3.40, -2.11) points, respectively. There were no differences among children born 34-41 weeks GA. Similarly, for children born <36 weeks GA, mathematic processing scores decreased by 1.77 (95% CI: -2.20, -1.34) points with each lower GA week. The prediction function generated using BLS data accurately predicted the effect of GA on IQ and mathematic processing among EPICure children. However, these children had better attainment than predicted by BLS. Prematurity has adverse effects on basic mathematic processing following birth at all gestations <36 weeks and on IQ and mathematic attainment <34 weeks GA. The ability to predict IQ and mathematic processing scores from one cohort to another among children cared for in different eras and countries suggests that universal neurodevelopmental factors may explain the effects of gestation at birth. In contrast, mathematic attainment may be improved by schooling. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
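
    A minimal sketch of the piecewise gestational-age effect reported here: scores decrease by a fixed amount per week below the breakpoint and are flat above it. The 2.34 IQ points per week and the 34-week breakpoint are taken from the abstract; the baseline score of 100 is an assumption for illustration.

        # Minimal sketch of the reported gestational-age effect on IQ (piecewise linear).
        def predicted_iq(ga_weeks, baseline=100.0, slope=2.34, breakpoint=34.0):
            # Baseline of 100 is an assumed reference, not a value from the study.
            return baseline - slope * max(0.0, breakpoint - ga_weeks)

        for ga in (25, 30, 34, 40):
            print(ga, predicted_iq(ga))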

  13. Universal Gestational Age Effects on Cognitive and Basic Mathematic Processing: 2 Cohorts in 2 Countries

    PubMed Central

    Wolke, Dieter; Strauss, Vicky Yu-Chun; Johnson, Samantha; Gilmore, Camilla; Marlow, Neil; Jaekel, Julia

    2015-01-01

    Objective To determine whether general cognitive ability, basic mathematic processing, and mathematic attainment are universally affected by gestation at birth, as well as whether mathematic attainment is more strongly associated with cohort-specific factors such as schooling than basic cognitive and mathematical abilities. Study design The Bavarian Longitudinal Study (BLS, 1289 children, 27-41 weeks gestational age [GA]) was used to estimate effects of GA on IQ, basic mathematic processing, and mathematic attainment. These estimations were used to predict IQ, mathematic processing, and mathematic attainment in the EPICure Study (171 children <26 weeks GA). Results For children born <34 weeks GA, each lower week decreased IQ and mathematic attainment scores by 2.34 (95% CI: −2.99, −1.70) and 2.76 (95% CI: −3.40, −2.11) points, respectively. There were no differences among children born 34-41 weeks GA. Similarly, for children born <36 weeks GA, mathematic processing scores decreased by 1.77 (95% CI: −2.20, −1.34) points with each lower GA week. The prediction function generated using BLS data accurately predicted the effect of GA on IQ and mathematic processing among EPICure children. However, these children had better attainment than predicted by BLS. Conclusions Prematurity has adverse effects on basic mathematic processing following birth at all gestations <36 weeks and on IQ and mathematic attainment <34 weeks GA. The ability to predict IQ and mathematic processing scores from one cohort to another among children cared for in different eras and countries suggests that universal neurodevelopmental factors may explain the effects of gestation at birth. In contrast, mathematic attainment may be improved by schooling. PMID:25842966

  14. Healthy Work Revisited: Do Changes in Time Strain Predict Well-Being?

    PubMed Central

    Moen, Phyllis; Kelly, Erin L.; Lam, Jack

    2013-01-01

    Building on Karasek and Theorell (R. Karasek & T. Theorell, 1990, Healthy work: Stress, productivity, and the reconstruction of working life, New York, NY: Basic Books), we theorized and tested the relationship between time strain (work-time demands and control) and seven self-reported health outcomes. We drew on survey data from 550 employees fielded before and 6 months after the implementation of an organizational intervention, the Results Only Work Environment (ROWE) in a white-collar organization. Cross-sectional (Wave 1) models showed psychological time demands and time control measures were related to health outcomes in expected directions. The ROWE intervention did not predict changes in psychological time demands by Wave 2, but did predict increased time control (a sense of time adequacy and schedule control). Statistical models revealed increases in psychological time demands and time adequacy predicted changes in positive (energy, mastery, psychological well-being, self-assessed health) and negative (emotional exhaustion, somatic symptoms, psychological distress) outcomes in expected directions, net of job and home demands and covariates. This study demonstrates the value of including time strain in investigations of the health effects of job conditions. Results encourage longitudinal models of change in psychological time demands as well as time control, along with the development and testing of interventions aimed at reducing time strain in different populations of workers. PMID:23506547

  15. Potential for the dynamics of pedestrians in a socially interacting group

    NASA Astrophysics Data System (ADS)

    Zanlungo, Francesco; Ikeda, Tetsushi; Kanda, Takayuki

    2014-01-01

    We introduce a simple potential to describe the dynamics of the relative motion of two pedestrians socially interacting in a walking group. We show that the proposed potential, based on basic empirical observations and theoretical considerations, can qualitatively describe the statistical properties of pedestrian behavior. In detail, we show that the two-dimensional probability distribution of the relative distance is determined by the proposed potential through a Boltzmann distribution. After calibrating the parameters of the model on the two-pedestrian group data, we apply the model to three-pedestrian groups, showing that it describes qualitatively and quantitatively well their behavior. In particular, the model predicts that three-pedestrian groups walk in a V-shaped formation and provides accurate values for the position of the three pedestrians. Furthermore, the model correctly predicts the average walking velocity of three-person groups based on the velocity of two-person ones. Possible extensions to larger groups, along with alternative explanations of the social dynamics that may be implied by our model, are discussed at the end of the paper.
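
    A hedged sketch of the central idea of this record: the relative distance between two group members follows a Boltzmann distribution p(r) proportional to exp(-U(r)). The quadratic potential with a preferred distance r0 below is a simplified placeholder, not the calibrated potential from the paper.

        # Hedged sketch: Boltzmann distribution of pedestrian relative distance for a toy potential.
        import numpy as np

        def potential(r, r0=0.75, k=10.0):
            # Placeholder quadratic potential with preferred inter-pedestrian distance r0 (m).
            return 0.5 * k * (r - r0) ** 2

        r = np.linspace(0.01, 2.0, 400)
        weights = np.exp(-potential(r))
        p = weights / weights.sum()          # discrete normalization over the grid

        print("most probable distance:", r[np.argmax(p)])
        print("mean distance:", np.sum(r * p))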

  16. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. An empirical analysis of thermal protective performance of fabrics used in protective clothing.

    PubMed

    Mandal, Sumit; Song, Guowen

    2014-10-01

    Fabric-based protective clothing is widely used for the occupational safety of firefighters and industrial workers. The aim of this paper is to study the thermal protective performance provided by fabric systems and to propose an effective model for predicting thermal protective performance under various thermal exposures. Different fabric systems that are commonly used to manufacture thermal protective clothing were selected. Laboratory simulations of the various thermal exposures were created to evaluate the protective performance of the selected fabric systems in terms of the time required to generate second-degree burns. Through the characterization of the selected fabric systems in a particular thermal exposure, various factors affecting the performance were statistically analyzed. The key factors for a particular thermal exposure were identified based on t-test analysis. Using these key factors, performance-predictive multiple linear regression and artificial neural network (ANN) models were developed and compared. The identified best-fit ANN models provide a basic tool to study the thermal protective performance of a fabric. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
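
    A minimal sketch of the model comparison described here: a multiple linear regression versus a small neural network for predicting time to second-degree burn from fabric factors. The predictor names and data are hypothetical stand-ins for the key factors identified by the t-tests.

        # Hedged sketch: multiple linear regression vs. a small ANN on simulated fabric data.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(6)
        n = 300
        thickness = rng.uniform(0.5, 3.0, n)        # mm (hypothetical)
        areal_density = rng.uniform(150, 400, n)    # g/m^2 (hypothetical)
        air_permeability = rng.uniform(5, 80, n)    # hypothetical
        X = np.column_stack([thickness, areal_density, air_permeability])
        burn_time = 3 + 4 * thickness + 0.02 * areal_density - 0.05 * air_permeability + rng.normal(0, 1, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, burn_time, test_size=0.3, random_state=0)
        mlr = LinearRegression().fit(X_tr, y_tr)
        ann = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)).fit(X_tr, y_tr)
        print("MLR R^2:", r2_score(y_te, mlr.predict(X_te)))
        print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))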

  18. A physical-based gas-surface interaction model for rarefied gas flow simulation

    NASA Astrophysics Data System (ADS)

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2018-01-01

    Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as the boundary condition in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics simulations can accurately resolve the gas-surface interaction process at the atomic scale and hence can predict accurate macroscopic behavior, but they are too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations required of a boundary condition, is developed based on the framework of the washboard model. By virtue of its physical basis, this new model is capable of capturing some important relations and trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models while being more efficient than MD simulations. Therefore, it can serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.

  19. Predictive value of plasma β2-microglobulin on human body function and senescence.

    PubMed

    Dong, X-M; Cai, R; Yang, F; Zhang, Y-Y; Wang, X-G; Fu, S-L; Zhang, J-R

    2016-06-01

    To explore the correlation of plasma β2-microglobulin (β2-MG), as a senescence factor, with age and with heart, liver, and kidney function, as well as the predictive value of β2-MG for human metabolic function and senescence. A total of 387 healthy people of different ages were selected, and an automatic biochemical analyzer was used to measure plasma β2-MG by immunoturbidimetry together with all other biochemical indexes. The correlations between β2-MG and age, gender, and all biochemical indexes were analyzed. β2-MG was positively correlated with age (r = 0.373), and the correlation was statistically significant (p < 0.010). It was significantly negatively correlated with HDL-C but positively correlated with LP(a), BUN, CREA, UA, CYS-C, LDH, CK-MB, HBDH, AST, GLB, and HCY. β2-MG was closely correlated with age and with cardiac, renal, and hepatic biochemical indexes; it can therefore be taken as an important biomarker of human body function and senescence, with significant value for basic research and clinical guidance.
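
    A minimal sketch of the reported association: a Pearson correlation between plasma β2-MG and age with its p-value. The data below are simulated and tuned only so that the correlation lands near the reported r = 0.373; they are not the study's measurements.

        # Minimal sketch: Pearson correlation between age and a simulated beta2-MG series.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        age = rng.uniform(20, 80, 387)
        b2mg = 1.2 + 0.012 * age + rng.normal(0, 0.55, 387)   # mg/L, hypothetical

        r, p = stats.pearsonr(age, b2mg)
        print(f"r = {r:.3f}, p = {p:.4g}")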

  20. Annual statistical report 2008 : based on data from CARE/EC

    DOT National Transportation Integrated Search

    2008-10-31

    This Annual Statistical Report provides the basic characteristics of road accidents in 19 member states of the European Union for the period 1997-2006, on the basis of data collected and processed in the CARE database, the Community Road Accident...

  1. Country Education Profiles: Algeria.

    ERIC Educational Resources Information Center

    International Bureau of Education, Geneva (Switzerland).

    One of a series of profiles prepared by the Cooperative Educational Abstracting Service, this brief outline provides basic background information on educational principles, system of administration, structure and organization, curricula, and teacher training in Algeria. Statistics provided by the Unesco Office of Statistics show enrollment at all…

  2. 78 FR 23158 - Organization and Delegation of Duties

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-18

    ... management actions of major significance, such as those relating to changes in basic organization pattern... regard to rulemaking, enforcement, vehicle safety research and statistics and data analysis, provides... Administrator for the National Center for Statistics and Analysis, and the Associate Administrator for Vehicle...

  3. A comparison of large-scale climate signals and the North American Multi-Model Ensemble (NMME) for drought prediction in China

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Chen, Nengcheng; Zhang, Xiang

    2018-02-01

    Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparation. In this study, we developed a statistical model, two weighted dynamic models, and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component refers to climate-signal weighting by support vector regression (SVR), the dynamic components consist of the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models, and the hybrid part denotes a combination of the statistical and dynamic components with weights assigned based on their historical performance. The results indicate that the statistical and hybrid models show better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought nationwide well, although the severity was underestimated in the mid-lower reaches of the Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. Model ensembles, such as multiple statistical approaches, multiple dynamic models, or multiple hybrid models, were highlighted for drought prediction. These conclusions may be helpful for drought prediction and early drought warnings in China.
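
    A hedged sketch of the hybrid weighting idea in this record: combine a statistical forecast and a dynamic-model forecast with weights proportional to their inverse historical error. The error values and forecasts below are placeholders, not actual SVR or NMME output.

        # Minimal sketch: hybrid forecast as an inverse-error weighted combination of components.
        hist_rmse = {"statistical": 18.0, "nmme_em": 25.0}        # mm, hypothetical hindcast errors
        inv = {k: 1.0 / v for k, v in hist_rmse.items()}
        weights = {k: v / sum(inv.values()) for k, v in inv.items()}

        forecast = {"statistical": 42.0, "nmme_em": 55.0}         # next-month rainfall forecasts (mm), hypothetical
        hybrid = sum(weights[k] * forecast[k] for k in forecast)
        print(weights, hybrid)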

  4. Application of artificial intelligence to the management of urological cancer.

    PubMed

    Abbod, Maysam F; Catto, James W F; Linkens, Derek A; Hamdy, Freddie C

    2007-10-01

    Artificial intelligence techniques, such as artificial neural networks, Bayesian belief networks, and neuro-fuzzy modeling systems, are complex mathematical models based on the human neuronal structure and thinking. Such tools are capable of generating data-driven models of biological systems without making assumptions based on statistical distributions. A large body of work has been reported on the use of artificial intelligence in urology. We reviewed the basic concepts behind artificial intelligence techniques and explored the applications of this new dynamic technology in various aspects of urological cancer management. A detailed and systematic review of the literature was performed using the MEDLINE and Inspec databases to discover reports using artificial intelligence in urological cancer. The characteristics of machine learning and their implementation were described, and reports of artificial intelligence use in urological cancer were reviewed. While most researchers in this field were found to focus on artificial neural networks to improve the diagnosis, staging, and prognostic prediction of urological cancers, some groups are exploring other techniques, such as expert systems and neuro-fuzzy modeling systems. Compared to traditional regression statistics, artificial intelligence methods appear to be accurate and more explorative for analyzing large data cohorts. Furthermore, they allow individualized prediction of disease behavior. Each artificial intelligence method has characteristics that make it suitable for different tasks. The lack of transparency of artificial neural networks hinders acceptance of this method by the global scientific community, but this can be overcome by neuro-fuzzy modeling systems.

  5. Utilization of Gastrointestinal Simulator, an in Vivo Predictive Dissolution Methodology, Coupled with Computational Approach To Forecast Oral Absorption of Dipyridamole.

    PubMed

    Matsui, Kazuki; Tsume, Yasuhiro; Takeuchi, Susumu; Searls, Amanda; Amidon, Gordon L

    2017-04-03

    Weakly basic drugs exhibit a pH-dependent dissolution profile in the gastrointestinal (GI) tract, which makes it difficult to predict their oral absorption profile. The aim of this study was to investigate the utility of the gastrointestinal simulator (GIS), a novel in vivo predictive dissolution (iPD) methodology, in predicting the in vivo behavior of the weakly basic drug dipyridamole when coupled with in silico analysis. The GIS is a multicompartmental dissolution apparatus that represents physiological gastric emptying in the fasted state. Kinetic parameters for drug dissolution and precipitation were optimized by fitting a curve to the dissolved drug amount-time profiles in the United States Pharmacopeia apparatus II and the GIS. The optimized parameters were incorporated into mathematical equations describing the mass transport kinetics of dipyridamole in the GI tract. Using this in silico model, the intraluminal drug concentration-time profile was simulated. The predicted profile of dipyridamole in the duodenal compartment adequately captured the observed data. In addition, the plasma concentration-time profile was also predicted using pharmacokinetic parameters obtained following intravenous administration. On the basis of comparison with observed data, the in silico approach coupled with the GIS successfully predicted in vivo pharmacokinetic profiles. Although further investigation is required to generalize these findings, the results indicate that incorporating GIS data into mathematical equations improves the predictability of the in vivo behavior of weakly basic drugs like dipyridamole.
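
    The curve-fitting step can be sketched with a simple rate law: first-order dissolution of the solid feeding a dissolved pool that is depleted by first-order precipitation, fitted to a dissolved-amount-time profile. The rate law, parameter names, and data below are illustrative assumptions; the study's actual mass-transport model is more detailed.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import curve_fit

      def dissolved_amount(t, k_diss, k_prec, dose=50.0):
          """Dissolved amount vs time with first-order dissolution and precipitation."""
          def rhs(_t, y):
              solid, dissolved = y
              return [-k_diss * solid, k_diss * solid - k_prec * dissolved]
          sol = solve_ivp(rhs, (t.min(), t.max()), [dose, 0.0], t_eval=t)
          return sol.y[1]

      # Hypothetical dissolved-amount data (mg) at sampling times (min)
      t_obs = np.array([0.0, 5, 10, 20, 30, 45, 60])
      a_obs = np.array([0.0, 18, 28, 33, 30, 25, 21])

      (k_diss, k_prec), _ = curve_fit(dissolved_amount, t_obs, a_obs, p0=[0.1, 0.01])
      print(f"k_diss = {k_diss:.3f} 1/min, k_prec = {k_prec:.3f} 1/min")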

  6. When Statistical Literacy Really Matters: Understanding Published Information about the HIV/AIDS Epidemic in South Africa

    ERIC Educational Resources Information Center

    Hobden, Sally

    2014-01-01

    Information on the HIV/AIDS epidemic in Southern Africa is often interpreted through a veil of secrecy and shame and, I argue, with flawed understanding of basic statistics. This research determined the levels of statistical literacy evident in 316 future Mathematical Literacy teachers' explanations of the median in the context of HIV/AIDS…

  7. Introduction to Statistics. Learning Packages in the Policy Sciences Series, PS-26. Revised Edition.

    ERIC Educational Resources Information Center

    Policy Studies Associates, Croton-on-Hudson, NY.

    The primary objective of this booklet is to introduce students to basic statistical skills that are useful in the analysis of public policy data. A few, selected statistical methods are presented, and theory is not emphasized. Chapter 1 provides instruction for using tables, bar graphs, bar graphs with grouped data, trend lines, pie diagrams,…

  8. Should I Pack My Umbrella? Clinical versus Statistical Prediction of Mental Health Decisions

    ERIC Educational Resources Information Center

    Aegisdottir, Stefania; Spengler, Paul M.; White, Michael J.

    2006-01-01

    In this rejoinder, the authors respond to the insightful commentary of Strohmer and Arm, Chwalisz, and Hilton, Harris, and Rice about the meta-analysis on statistical versus clinical prediction techniques for mental health judgments. The authors address issues including the availability of statistical prediction techniques for real-life psychology…

  9. 78 FR 70303 - Announcement of Requirements and Registration for the Predict the Influenza Season Challenge

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-25

    ... public. Mathematical and statistical models can be useful in predicting the timing and impact of the... applying any mathematical, statistical, or other approach to predictive modeling. This challenge will... Services (HHS) region level(s) in the United States by developing mathematical and statistical models that...

  10. Basic self-disturbance predicts psychosis onset in the ultra high risk for psychosis "prodromal" population.

    PubMed

    Nelson, Barnaby; Thompson, Andrew; Yung, Alison R

    2012-11-01

    Phenomenological research indicates that disturbance of the basic sense of self may be a core phenotypic marker of schizophrenia spectrum disorders. Basic self-disturbance refers to a disruption of the sense of ownership of experience and agency of action and is associated with a variety of anomalous subjective experiences. In this study, we investigated the presence of basic self-disturbance in an "ultra high risk" (UHR) for psychosis sample compared with a healthy control sample and whether it predicted transition to psychotic disorder. Forty-nine UHR patients and 52 matched healthy control participants were recruited to the study. Participants were assessed for basic self-disturbance using the Examination of Anomalous Self-Experience (EASE) instrument. UHR participants were followed for a mean of 569 days. Levels of self-disturbance were significantly higher in the UHR sample compared with the healthy control sample (P < .001). Cox regression indicated that total EASE score significantly predicted time to transition (P < .05) when other significant predictors were controlled for. Exploratory analyses indicated that basic self-disturbance scores were higher in schizophrenia spectrum cases, irrespective of transition to psychosis, than nonschizophrenia spectrum cases. The results indicate that identifying basic self-disturbance in the UHR population may provide a means of further "closing in" on individuals truly at high risk of psychotic disorder, particularly of schizophrenia spectrum disorders. This may be of practical value by reducing inclusion of "false positive" cases in UHR samples and of theoretical value by shedding light on core phenotypic features of schizophrenia spectrum pathology.

  11. The Thurgood Marshall School of Law Empirical Findings: A Report of the 2012 Friday Academy Attendance and Statistical Comparisons of 1L GPA (Predicted and Actual)

    ERIC Educational Resources Information Center

    Kadhi, T.; Rudley, D.; Holley, D.; Krishna, K.; Ogolla, C.; Rene, E.; Green, T.

    2010-01-01

    The following report of descriptive statistics addresses the attendance of the 2012 class and the average Actual and Predicted 1L Grade Point Averages (GPAs). Correlational and Inferential statistics are also run on the variables of Attendance (Y/N), Attendance Number of Times, Actual GPA, and Predictive GPA (Predictive GPA is defined as the Index…

  12. 75 FR 33203 - Funding Formula for Grants to States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-11

    ... as Social Security numbers, birth dates, and medical data. Docket: To read or download submissions or... Local Area Unemployment Statistics (LAUS), both of which are compiled by DOL's Bureau of Labor Statistics. Specifies how each State's basic JVSG allocation is calculated. Identifies the procedures...

  13. Statistical Considerations for Establishing CBTE Cut-Off Scores.

    ERIC Educational Resources Information Center

    Trzasko, Joseph A.

    This report gives the basic definition and purpose of competency-based teacher education (CBTE) cut-off scores. It describes the basic characteristics of CBTE as a yes-no dichotomous decision regarding the presence of a specific ability or knowledge, which necessitates the establishment of a cut-off point to designate competency vs. incompetency on…

  14. ADULT BASIC EDUCATION. PROGRAM SUMMARY.

    ERIC Educational Resources Information Center

    Office of Education (DHEW), Washington, DC.

    A brief description is given of the federal Adult Basic Education program, under the Adult Education Act of 1966, at the national and state levels (including Puerto Rico, Guam, American Samoa, and the Virgin Islands) as provided by state education agencies. Statistics for fiscal years 1965 and 1966, and estimates for fiscal year 1967, indicate…

  15. Action Research of Computer-Assisted-Remediation of Basic Research Concepts.

    ERIC Educational Resources Information Center

    Packard, Abbot L.; And Others

    This study investigated the possibility of creating a computer-assisted remediation program to assist students having difficulties in basic college research and statistics courses. A team approach involving instructors and students drove the research into and creation of the computer program. The effect of student use was reviewed by looking at…

  16. Introduction to Probability, Part 1 - Basic Concepts. Student Text. Revised Edition.

    ERIC Educational Resources Information Center

    Blakeslee, David W.; And Others

    This book is designed to introduce the reader to some fundamental ideas about probability. The mathematical theory of probability plays an increasingly important role in science, government, industry, business, and economics. An understanding of the basic concepts of probability is essential for the study of statistical methods that are widely…

  17. 77 FR 37059 - Draft Guidance for Industry on Active Controls in Studies To Demonstrate Effectiveness of a New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-20

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-D-0419... who conduct studies using active controls and have a basic understanding of statistical principles... clinical investigators who conduct studies using active controls and have a basic understanding of...

  18. Combining statistical inference and decisions in ecology.

    PubMed

    Williams, Perry J; Hooten, Mevin B

    2016-09-01

    Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods, including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem. © 2016 by the Ecological Society of America.
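
    As a concrete reminder of how loss functions drive SDT (standard textbook material, not specific to this paper), the Bayes decision minimizes posterior expected loss, and familiar point estimators correspond to particular loss choices:

      \hat{a} = \arg\min_a \, \mathbb{E}\left[L(\theta, a) \mid \text{data}\right], \qquad
      L(\theta, a) = (\theta - a)^2 \;\Rightarrow\; \hat{a} = \mathbb{E}[\theta \mid \text{data}], \qquad
      L(\theta, a) = |\theta - a| \;\Rightarrow\; \hat{a} = \operatorname{median}(\theta \mid \text{data}).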

  19. Relative costs of prebasic and prealternate molts for male blue-winged teal

    USGS Publications Warehouse

    Hohman, W.L.; Manley, S.W.; Richard, D.

    1997-01-01

    We compared masses of definitive basic and alternate plumages of male Blue-winged Teal (Anas discors) to evaluate the hypothesis that nutritional investments in basic and alternate plumages are related to the duration that plumages are worn and to assess the relative costs of prebasic and prealternate molts. Because these plumages are worn by males for approximately equal durations, we predicted that masses of the basic and alternate body plumages would be similar. To assess nutritional stress (demands greater than available resources) associated with molt, we examined the relation between remigial length and structural size and compared predicted and observed plumage masses of Blue-winged Teal and other ducks. If birds were nutritionally challenged during remigial molt, then we predicted remigial length would be influenced by nutrition rather than size, and remigial length and size would be unrelated. Alternate body plumage of male Blue-winged Teal weighed about 10% more than the basic body plumage; however, masses of both plumages were less than that predicted on the basis of lean body mass. We argue that deviations between observed and predicted plumage masses were related to factors other than nutrition. Further, remigial lengths were significantly, albeit weakly, related to structural size. We therefore concluded that, although the potential for molt-induced stress may be greatest in small-bodied waterfowl species, there was no clear evidence that molting male Blue-winged Teal were nutritionally stressed. © The Cooper Ornithological Society 1997.

  20. Peers versus professional training of basic life support in Syria: a randomized controlled trial.

    PubMed

    Abbas, Fatima; Sawaf, Bisher; Hanafi, Ibrahem; Hajeer, Mohammad Younis; Zakaria, Mhd Ismael; Abbas, Wafaa; Alabdeh, Fadi; Ibrahim, Nazir

    2018-06-18

    Peer training has been identified as a useful tool for delivering undergraduate training in basic life support (BLS), which is fundamental as an initial response in emergencies. This study aimed to (1) evaluate the efficacy of a peer-led model of basic life support training among medical students in their first three years of study, compared with professional-led training, and (2) assess the efficacy of the course program and students' satisfaction with peer-led training. A randomized controlled trial with blinded assessors was conducted on 72 medical students from the preclinical years (1st to 3rd years in Syria) at the Syrian Private University. Students were randomly assigned to a peer-led or a professional-led training group for a one-day course in basic life support skills. Sixty-four students who underwent checklist-based assessment using an objective structured clinical examination (OSCE) design (practical assessment of BLS skills) and answered a BLS knowledge checkpoint questionnaire were included in the analysis. There was no statistically significant difference between the two groups in delivering BLS skills to medical students in the practical (P = 0.850) or BLS knowledge questionnaire outcomes (P = 0.900). Both groups showed statistically significant improvement from pre- to post-course assessment in both practical skills and theoretical knowledge (P < 0.001). Students were satisfied with the peer model of training. Peer-led training of basic life support for medical students was beneficial and provided a quality of education as effective as training conducted by professionals. This method is applicable and desirable, especially in resource-poor countries and in crisis situations.

  1. An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Crooke, S. C.

    1970-01-01

    Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 System, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed which permits interchanging the two different methods, the buddy and the first-fit methods with some modifications. Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to the other method.
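
    A minimal sketch of the buddy method mentioned above: blocks are powers of two, allocation splits larger blocks as needed, and freeing coalesces a block with its "buddy" when both are free (first-fit would instead scan a single free list for the first sufficiently large hole). This toy is an assumption-laden illustration, not the Univac 1108 EXEC 8 implementation.

      class BuddyAllocator:
          def __init__(self, total_size):
              # total_size must be a power of two
              self.max_order = total_size.bit_length() - 1
              # free_lists[k] holds start offsets of free blocks of size 2**k
              self.free_lists = {k: set() for k in range(self.max_order + 1)}
              self.free_lists[self.max_order].add(0)

          def alloc(self, size):
              order = max(0, (size - 1).bit_length())   # smallest power of two >= size
              k = order
              while k <= self.max_order and not self.free_lists[k]:
                  k += 1                                # look for a larger free block
              if k > self.max_order:
                  return None                           # out of memory
              start = self.free_lists[k].pop()
              while k > order:                          # split down to the needed size
                  k -= 1
                  self.free_lists[k].add(start + 2 ** k)
              return start, order

          def free(self, start, order):
              k = order
              while k < self.max_order:                 # coalesce with free buddies
                  buddy = start ^ (1 << k)
                  if buddy not in self.free_lists[k]:
                      break
                  self.free_lists[k].remove(buddy)
                  start = min(start, buddy)
                  k += 1
              self.free_lists[k].add(start)

      allocator = BuddyAllocator(64)
      a = allocator.alloc(13)      # rounded up to a 16-unit block
      b = allocator.alloc(5)       # rounded up to an 8-unit block
      allocator.free(*a)
      allocator.free(*b)           # coalescing restores the single 64-unit block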

  2. Ultrasound Dopplerography of abdomen pathology using statistical computer programs

    NASA Astrophysics Data System (ADS)

    Dmitrieva, Irina V.; Arakelian, Sergei M.; Wapota, Alberto R. W.

    1998-04-01

    Modern ultrasound Dopplerography offers considerable possibilities for investigating hemodynamic changes at all stages of abdominal pathology. Much research has been devoted to the use of noninvasive methods in practical medicine, and ultrasound Dopplerography is now one of the basic ones. We investigated 250 patients aged 30 to 77 years, including 149 men and 101 women. The primary diagnosis of all patients was ischemic pancreatitis. Secondary diagnoses included ischemic heart disease, hypertension, atherosclerosis, diabetes, and vascular disease of the extremities. We examined the abdominal aorta and its branches: arteria mesenterica superior (AMS), truncus coeliacus (TC), arteria hepatica communis (AHC), and arteria lienalis (AL). The following equipment was used: ACUSON 128 XP/10c, BIOMEDIC, GENERAL ELECTRIC (USA, Japan). We analyzed the following components of hemodynamic change in the abdominal vessels: pulsatility index, resistance index, systolic-to-diastolic ratio, and blood flow velocity. The statistical software included 'basic statistics' and 'analytic program' modules. We concluded that all hemodynamic components of the abdominal vessels showed considerably greater changes in abdominal ischemia than in the normal situation. Using the computer program to grade the degree of hemodynamic change, we can recommend an individual plan of diagnosis and treatment.
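
    The hemodynamic indices named in this record have standard definitions in Doppler ultrasonography; they are given here for reference (PSV = peak systolic velocity, EDV = end-diastolic velocity, TAV = time-averaged velocity) and are not quoted from the paper itself:

      \mathrm{RI} = \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{PSV}}, \qquad
      \mathrm{PI} = \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{TAV}}, \qquad
      \mathrm{S/D} = \frac{\mathrm{PSV}}{\mathrm{EDV}}.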

  3. Physics of negative absolute temperatures.

    PubMed

    Abraham, Eitan; Penrose, Oliver

    2017-01-01

    Negative absolute temperatures were introduced into experimental physics by Purcell and Pound, who successfully applied this concept to nuclear spins; nevertheless, the concept has proved controversial: a recent article aroused considerable interest by its claim, based on a classical entropy formula (the "volume entropy") due to Gibbs, that negative temperatures violated basic principles of statistical thermodynamics. Here we give a thermodynamic analysis that confirms the negative-temperature interpretation of the Purcell-Pound experiments. We also examine the principal arguments that have been advanced against the negative temperature concept; we find that these arguments are not logically compelling, and moreover that the underlying "volume" entropy formula leads to predictions inconsistent with existing experimental results on nuclear spins. We conclude that, despite the counterarguments, negative absolute temperatures make good theoretical sense and did occur in the experiments designed to produce them.
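
    The thermodynamic definition underlying the argument (standard statistical mechanics, not specific to this paper) is that temperature is defined through the energy derivative of the entropy, so a negative absolute temperature corresponds to a regime where entropy decreases with increasing energy, as in a population-inverted spin system:

      \frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}, \qquad
      T < 0 \iff \frac{\partial S}{\partial E} < 0.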

  4. Resilience Among Students at the Basic Enlisted Submarine School

    DTIC Science & Technology

    2016-12-01

    …reported resilience. The Hayes macro in the Statistical Package for the Social Sciences (SPSS) was used to uncover factors relevant to mediation analysis. Findings suggest that the encouragement of…

  5. A test of basic psychological needs theory in young soccer players: time-lagged design at the individual and team levels.

    PubMed

    González, L; Tomás, I; Castillo, I; Duda, J L; Balaguer, I

    2017-11-01

    Within the framework of basic psychological needs theory (Deci & Ryan, 2000), multilevel structural equation modeling (MSEM) with a time-lagged design was used to test a mediation model examining the relationship between perceptions of coaches' interpersonal styles (autonomy supportive and controlling), athletes' basic psychological needs (satisfaction and thwarting), and indicators of well-being (subjective vitality) and ill-being (burnout), estimating separately between and within effects. The participants were 597 Spanish male soccer players aged between 11 and 14 years (M = 12.57, SD = 0.54) from 40 teams who completed a questionnaire package at two time points in a competitive season. Results revealed that at the individual level, athletes' perceptions of autonomy support positively predicted athletes' need satisfaction (autonomy, competence, and relatedness), whereas athletes' perceptions of controlling style positively predicted athletes' need thwarting (autonomy, competence, and relatedness). In turn, all three athletes' need satisfaction dimensions predicted athletes' subjective vitality and burnout (positively and negatively, respectively), whereas competence thwarting negatively predicted subjective vitality and competence and relatedness positively predicted burnout. At the team level, team perceptions of autonomy supportive style positively predicted team autonomy and relatedness satisfaction. Mediation effects only appeared at the individual level. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. 76 FR 41756 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-15

    ... materials and supplies used in production. The economic census will produce basic statistics by kind of business on number of establishments, sales, payroll, employment, inventories, and operating expenses. It also will yield a variety of subject statistics, including sales by product line; sales by class of...

  7. Descriptive Statistics: Reporting the Answers to the 5 Basic Questions of Who, What, Why, When, Where, and a Sixth, So What?

    PubMed

    Vetter, Thomas R

    2017-11-01

    Descriptive statistics are specific methods basically used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome), and generating a corresponding confidence interval are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
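
    A minimal sketch of the quantities discussed in this tutorial, computed for a small hypothetical sample; the t-based confidence interval for the mean is one common choice among several, and the mode is of limited use for continuous data.

      import numpy as np
      from collections import Counter
      from scipy import stats

      data = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 7.2, 5.0, 4.4, 5.8])

      mean, median = data.mean(), np.median(data)
      mode = Counter(data.tolist()).most_common(1)[0][0]
      sd = data.std(ddof=1)                      # sample standard deviation
      q1, q3 = np.percentile(data, [25, 75])
      iqr = q3 - q1                              # interquartile range

      # 95% confidence interval for the mean (t distribution, n - 1 df)
      n = len(data)
      t_crit = stats.t.ppf(0.975, df=n - 1)
      half_width = t_crit * sd / np.sqrt(n)
      ci = (mean - half_width, mean + half_width)

      print(f"mean={mean:.2f} median={median:.2f} mode={mode}")
      print(f"sd={sd:.2f} IQR={iqr:.2f} 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")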

  8. Basic traits predict the prevalence of personality disorder across the life span: the example of psychopathy.

    PubMed

    Vachon, David D; Lynam, Donald R; Widiger, Thomas A; Miller, Joshua D; McCrae, Robert R; Costa, Paul T

    2013-05-01

    Personality disorders (PDs) may be better understood in terms of dimensions of general personality functioning rather than as discrete categorical conditions. Personality-trait descriptions of PDs are robust across methods and settings, and PD assessments based on trait measures show good construct validity. The study reported here extends research showing that basic traits (e.g., impulsiveness, warmth, straightforwardness, modesty, and deliberation) can re-create the epidemiological characteristics associated with PDs. Specifically, we used normative changes in absolute trait levels to simulate age-related differences in the prevalence of psychopathy in a forensic setting. Results demonstrated that trait information predicts the rate of decline for psychopathy over the life span; discriminates the decline of psychopathy from that of a similar disorder, antisocial PD; and accurately predicts the differential decline of subfactors of psychopathy. These findings suggest that basic traits provide a parsimonious account of PD prevalence across the life span.

  9. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.

  10. Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.

    PubMed

    Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza

    2017-12-01

    Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and for a comparative study of the basic and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were respectively utilized for creating and comparing the models. The results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of ensemble models was higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). AUC comparison of the models showed that the ensemble voting method significantly outperformed all models except Random Forest (RF) and Bayesian Network (BN), considering their overlapping 95% confidence intervals. This result may indicate high predictive power of these two methods along with ensemble voting for predicting 5-year survival of CRC patients.
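
    The study itself used the WEKA toolkit; the sketch below only illustrates the general idea of comparing a basic classifier with a soft-voting ensemble by AUC, using scikit-learn and synthetic data rather than the CRC dataset.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      # Synthetic stand-in for a 5-year-survival classification task
      X, y = make_classification(n_samples=1000, n_features=15, n_informative=6, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      basic = GaussianNB().fit(X_tr, y_tr)
      ensemble = VotingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("rf", RandomForestClassifier(random_state=0)),
                      ("nb", GaussianNB())],
          voting="soft",
      ).fit(X_tr, y_tr)

      for name, model in [("basic naive Bayes", basic), ("voting ensemble", ensemble)]:
          auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          print(f"{name}: AUC = {auc:.3f}")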

  11. ON THE DYNAMICAL DERIVATION OF EQUILIBRIUM STATISTICAL MECHANICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prigogine, I.; Balescu, R.; Henin, F.

    1960-12-01

    Work on nonequilibrium statistical mechanics, which allows an extension of the kinetic proof to all results of equilibrium statistical mechanics involving a finite number of degrees of freedom, is summarized. As an introduction to the general N-body problem, the scattering theory in classical mechanics is considered. The general N-body problem is considered for the case of classical mechanics, quantum mechanics with Boltzmann statistics, and quantum mechanics including quantum statistics. Six basic diagrams, which describe the elementary processes of the dynamics of correlations, were obtained. (M.C.G.)

  12. 'New insight into statistical hydrology' preface to the special issue

    NASA Astrophysics Data System (ADS)

    Kochanek, Krzysztof

    2018-04-01

    Statistical methods are still the basic tool for investigating random, extreme events occurring in the hydrosphere. On 21-22 September 2017, the international Statistical Hydrology (StaHy) 2017 workshop took place in Warsaw (Poland) under the auspices of the International Association of Hydrological Sciences. The authors of the presentations proposed to publish their research results in the Special Issue of Acta Geophysica, 'New Insight into Statistical Hydrology'. Five papers were selected for publication, touching on the most crucial issues of statistical methodology in hydrology.

  13. Prediction of paroxysmal atrial fibrillation using recurrence plot-based features of the RR-interval signal.

    PubMed

    Mohebbi, Maryam; Ghassemian, Hassan

    2011-08-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia and increases the risk of stroke. Predicting the onset of paroxysmal AF (PAF), based on noninvasive techniques, is clinically important and can be invaluable in order to avoid useless therapeutic intervention and to minimize risks for the patients. In this paper, we propose an effective PAF predictor which is based on the analysis of the RR-interval signal. This method consists of three steps: preprocessing, feature extraction and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the RR-interval signal is extracted. In the next step, the recurrence plot (RP) of the RR-interval signal is obtained and five statistically significant features are extracted to characterize the basic patterns of the RP. These features consist of the recurrence rate, length of the longest diagonal segments (Lmax), average length of the diagonal lines (Lmean), entropy, and trapping time. Recurrence quantification analysis can reveal subtle aspects of dynamics not easily appreciated by other methods and exhibits characteristic patterns which are caused by the typical dynamical behavior. In the final step, a support vector machine (SVM)-based classifier is used for PAF prediction. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database (AFPDB) which consists of both 30 min ECG recordings that end just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, positive predictivity and negative predictivity were 97%, 100%, 100%, and 96%, respectively. The proposed methodology presents better results than other existing approaches.
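
    Two of the listed features, the recurrence plot and the recurrence rate, can be sketched as follows; the embedding dimension, delay, threshold, and RR-interval data are illustrative assumptions, and the diagonal-line features (Lmax, Lmean, entropy, trapping time) would be derived from the same binary matrix.

      import numpy as np

      def recurrence_matrix(x, dim=3, delay=1, eps=0.05):
          """Binary recurrence plot of a 1-D signal using time-delay embedding."""
          n = len(x) - (dim - 1) * delay
          emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
          dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          return (dists <= eps).astype(int)

      # Hypothetical RR-interval series (seconds)
      rng = np.random.default_rng(1)
      rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20, 300)) + 0.01 * rng.standard_normal(300)

      rp = recurrence_matrix(rr)
      recurrence_rate = rp.sum() / rp.size      # fraction of recurrent points
      print(f"recurrence rate = {recurrence_rate:.3f}")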

  14. How to interpret a small increase in AUC with an additional risk prediction marker: decision analysis comes through.

    PubMed

    Baker, Stuart G; Schuit, Ewoud; Steyerberg, Ewout W; Pencina, Michael J; Vickers, Andrew; Moons, Karel G M; Mol, Ben W J; Lindeman, Karen S

    2014-09-28

    An important question in the evaluation of an additional risk prediction marker is how to interpret a small increase in the area under the receiver operating characteristic curve (AUC). Many researchers believe that a change in AUC is a poor metric because it increases only slightly with the addition of a marker with a large odds ratio. Because it is not possible on purely statistical grounds to choose between the odds ratio and AUC, we invoke decision analysis, which incorporates costs and benefits. For example, a timely estimate of the risk of later non-elective operative delivery can help a woman in labor decide if she wants an early elective cesarean section to avoid greater complications from possible later non-elective operative delivery. A basic risk prediction model for later non-elective operative delivery involves only antepartum markers. Because adding intrapartum markers to this risk prediction model increases AUC by 0.02, we questioned whether this small improvement is worthwhile. A key decision-analytic quantity is the risk threshold, here the risk of later non-elective operative delivery at which a patient would be indifferent between an early elective cesarean section and usual care. For a range of risk thresholds, we found that an increase in the net benefit of risk prediction requires collecting intrapartum marker data on 68 to 124 women for every correct prediction of later non-elective operative delivery. Because data collection is non-invasive, this test tradeoff of 68 to 124 is clinically acceptable, indicating the value of adding intrapartum markers to the risk prediction model. Copyright © 2014 John Wiley & Sons, Ltd.
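
    The decision-analytic quantities referred to here follow the usual net-benefit formulation; as a reminder (standard definitions, with the test tradeoff stated only approximately), for a risk threshold p_t and sample size n:

      \mathrm{NB}(p_t) = \frac{\mathrm{TP}}{n} - \frac{\mathrm{FP}}{n}\cdot\frac{p_t}{1 - p_t}, \qquad
      \text{test tradeoff} \approx \frac{1}{\Delta \mathrm{NB}},

    where TP and FP are the true and false positives at threshold p_t, and Delta NB is the increase in net benefit from adding the marker, so the test tradeoff is roughly the number of marker measurements needed per additional correct prediction.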

  15. ASPEN-AND-ESPEN: A postacute-care comparison of the basic definition of malnutrition from the American Society of Parenteral and Enteral Nutrition and Academy of Nutrition and Dietetics with the European Society for Clinical Nutrition and Metabolism definition.

    PubMed

    Sánchez-Rodríguez, Dolores; Marco, Ester; Ronquillo-Moreno, Natalia; Maciel-Bravo, Liev; Gonzales-Carhuancho, Abel; Duran, Xavier; Guillén-Solà, Anna; Vázquez-Ibar, Olga; Escalada, Ferran; Muniesa, Josep M

    2018-01-25

    The aim of this study was to assess the prevalence of malnutrition by applying the ASPEN/AND definition and the ESPEN consensus definition in a postacute-care population, and secondly, to determine the metrological properties of the set of six clinical characteristics that constitute the ASPEN/AND basic diagnosis, compared to the ESPEN consensus, based mostly on objective anthropometric measurements. Prospective study of 84 consecutive deconditioned older inpatients (85.4 ± 6.2; 59.5% women) admitted for rehabilitation in postacute care. ASPEN/AND diagnosis of malnutrition was considered in presence of at least two of the following: low energy intake, fluid accumulation, diminished handgrip strength, and loss of weight, muscle mass, or subcutaneous fat. Sensitivity, specificity, positive and negative predictive values, accuracy, likelihood ratios, and kappa statistics were calculated for ASPEN/AND criteria and compared with ESPEN consensus. The prevalence of malnutrition by ASPEN/AND criteria was 63.1% and by ESPEN consensus, 20.2%; both diagnoses were associated with significantly longer length of stay, but the ESPEN definition was significantly associated with poorer functional outcomes after the rehabilitation program. Compared to ESPEN consensus, ASPEN/AND diagnosis showed fair validity (sensitivity = 94.1%; specificity = 44.8%); kappa statistic was 2.217. Applying the ASPEN/AND definition obtained a higher prevalence of malnutrition in a postacute-care population than was identified by the ESPEN definition. ASPEN/AND criteria had fair validity and agreement compared with the ESPEN definition. A simple, evidence-based, unified malnutrition definition might improve geriatric care. Copyright © 2018 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  16. Pediatric outcomes data collection instrument scores in ambulatory children with cerebral palsy: an analysis by age groups and severity level.

    PubMed

    Barnes, Douglas; Linton, Judith L; Sullivan, Elroy; Bagley, Anita; Oeffinger, Donna; Abel, Mark; Damiano, Diane; Gorton, George; Nicholson, Diane; Romness, Mark; Rogers, Sarah; Tylkowski, Chester

    2008-01-01

    The Pediatric Outcomes Data Collection Instrument (PODCI) was developed in 1994 as a patient-based tool for use across a broad age range and wide array of musculoskeletal disorders, including children with cerebral palsy (CP). The purpose of this study was to establish means and SDs of the Parent PODCI measures by age groups and Gross Motor Function Classification System (GMFCS) levels for ambulatory children with CP. This instrument was one of several studied in a prospective, multicenter project of ambulatory patients with CP aged between 4 and 18 years and at GMFCS levels I through III. Participants included 338 boys and 221 girls with a mean age of 11.1 years; 370 were diplegic, 162 hemiplegic, and 27 quadriplegic. Both baseline and follow-up data sets of the completed Parent PODCI responses were statistically analyzed. Age was identified as a significant predictor of the PODCI measures of Upper Extremity Function, Transfers and Basic Mobility, Global Function, and Happiness With Physical Condition. Gross Motor Function Classification System level was a significant predictor of Transfers and Basic Mobility, Sports and Physical Function, and Global Function. Pattern of involvement, sex, and prior orthopaedic surgery were not statistically significant predictors for any of the Parent PODCI measures. Mean and SD scores were calculated for age groups stratified by GMFCS levels. Analysis of the follow-up data set validated the findings derived from the baseline data. Linear regression equations were derived, with age as a continuous variable and GMFCS level as a categorical variable, to be used for Parent PODCI predicted scores. The results of this study provide clinicians and researchers with a set of Parent PODCI values for comparison to age- and severity-matched populations of ambulatory patients with CP.

  17. Data Mining CMMSs: How to Convert Data into Knowledge.

    PubMed

    Fennigkoh, Larry; Nanney, D Courtney

    2018-01-01

    Although the healthcare technology management (HTM) community has decades of accumulated medical device-related maintenance data, little knowledge has been gleaned from these data. Finding and extracting such knowledge requires the use of the well-established, but admittedly somewhat foreign to HTM, application of inferential statistics. This article sought to provide a basic background on inferential statistics and describe a case study of their application, limitations, and proper interpretation. The research question associated with this case study involved examining the effects of ventilator preventive maintenance (PM) labor hours, age, and manufacturer on needed unscheduled corrective maintenance (CM) labor hours. The study sample included more than 21,000 combined PM inspections and CM work orders on 2,045 ventilators from 26 manufacturers during a five-year period (2012-16). A multiple regression analysis revealed that device age, manufacturer, and accumulated PM inspection labor hours all influenced the amount of CM labor significantly (P < 0.001). In essence, CM labor hours increased with increasing PM labor. However, and despite the statistical significance of these predictors, the regression analysis also indicated that ventilator age, manufacturer, and PM labor hours only explained approximately 16% of all variability in CM labor, with the remainder (84%) caused by other factors that were not included in the study. As such, the regression model obtained here is not suitable for predicting ventilator CM labor hours.
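
    A minimal sketch of the kind of multiple regression described (CM labor regressed on PM labor, device age, and manufacturer) using statsmodels; the column names and synthetic data are hypothetical, chosen so that most of the variability remains unexplained, as in the study.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 500
      df = pd.DataFrame({
          "pm_hours": rng.gamma(2.0, 1.5, n),
          "age_years": rng.uniform(1, 12, n),
          "manufacturer": rng.choice(["A", "B", "C"], n),
      })
      # Synthetic CM hours with only weak dependence on the predictors
      df["cm_hours"] = (0.4 * df.pm_hours + 0.2 * df.age_years
                        + df.manufacturer.map({"A": 0.0, "B": 0.5, "C": 1.0})
                        + rng.normal(0, 3, n))

      model = smf.ols("cm_hours ~ pm_hours + age_years + C(manufacturer)", data=df).fit()
      print(model.summary())
      print(f"R-squared: {model.rsquared:.2f}")   # share of CM-hour variability explained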

  18. On prognostic models, artificial intelligence and censored observations.

    PubMed

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistical or artificial intelligence techniques can produce models that display all of these characteristics. The inability of modelling techniques to provide truly useful models has led to interest in these models being purely academic in nature. This in turn has resulted in only a very small percentage of the models that have been developed being deployed in practice. On the other hand, new modelling paradigms are proposed continuously within the machine learning and statistical communities, and claims, often based on inadequate evaluation, are made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcome of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.

  19. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
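
    Schematically, both statistics are percent changes in the linear-theory prediction standard deviation; following the description above (the notation here is generic, not the report's exact symbols), with s_z the base-case standard deviation of prediction z:

      \mathrm{OPR} = 100 \times \frac{s_{z}^{(\text{obs omitted or added})} - s_{z}}{s_{z}}, \qquad
      \mathrm{PPR} = 100 \times \frac{s_{z}^{(\text{parameter info added})} - s_{z}}{s_{z}},

    so that omitting informative observations yields a positive percent change, while adding observations or parameter information yields a negative one (a percent decrease).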

  20. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
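
    The c-statistic reported throughout is the area under the ROC curve for a binary outcome. A minimal sketch of how such values are computed (synthetic data and scikit-learn, not the VA models or DCG/ACG software):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000
      age = rng.uniform(40, 90, n)
      risk_score = rng.gamma(2.0, 1.0, n)        # stand-in for a diagnosis-based risk score
      logit = -8 + 0.06 * age + 0.8 * risk_score
      died = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([age, risk_score])
      X_tr, X_te, y_tr, y_te = train_test_split(X, died, test_size=0.3, random_state=0)

      model = LogisticRegression().fit(X_tr, y_tr)
      c_stat = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      print(f"c-statistic = {c_stat:.3f}")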

  1. Predicting juvenile recidivism: new method, old problems.

    PubMed

    Benda, B B

    1987-01-01

    This prediction study compared three statistical procedures for accuracy using two assessment methods. The criterion is return to a juvenile prison after the first release, and the models tested are logit analysis, predictive attribute analysis, and a Burgess procedure. No significant differences in predictive accuracy are found among the statistical procedures.

  2. Statistics for the Relative Detectability of Chemicals in Weak Gaseous Plumes in LWIR Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metoyer, Candace N.; Walsh, Stephen J.; Tardiff, Mark F.

    2008-10-30

    The detection and identification of weak gaseous plumes using thermal imaging data is complicated by many factors. These include variability due to atmosphere, ground and plume temperature, and background clutter. This paper presents an analysis of one formulation of the physics-based model that describes the at-sensor observed radiance. The motivating question for the analyses performed in this paper is as follows. Given a set of backgrounds, is there a way to predict the background over which the probability of detecting a given chemical will be the highest? Two statistics were developed to address this question. These statistics incorporate data from the long-wave infrared band to predict the background over which chemical detectability will be the highest. These statistics can be computed prior to data collection. As a preliminary exploration into the predictive ability of these statistics, analyses were performed on synthetic hyperspectral images. Each image contained one chemical (either carbon tetrachloride or ammonia) spread across six distinct background types. The statistics were used to generate predictions for the background ranks. Then, the predicted ranks were compared to the empirical ranks obtained from the analyses of the synthetic images. For the simplified images under consideration, the predicted and empirical ranks showed a promising amount of agreement. One statistic accurately predicted the best and worst background for detection in all of the images. Future work may include explorations of more complicated plume ingredients, background types, and noise structures.

  3. Solar-terrestrial predictions proceedings. Volume 4: Prediction of terrestrial effects of solar activity

    NASA Technical Reports Server (NTRS)

    Donnelly, R. E. (Editor)

    1980-01-01

    Papers about the prediction of ionospheric and radio propagation conditions, based primarily on empirical or statistical relations, are discussed. Predictions of sporadic E, spread F, and scintillations generally involve statistical or empirical predictions. The correlation between solar activity and terrestrial seismic activity and the possible relation between solar activity and biological effects are also discussed.

  4. The Social Profile of Students in Basic General Education in Ecuador: A Data Analysis

    ERIC Educational Resources Information Center

    Buri, Olga Elizabeth Minchala; Stefos, Efstathios

    2017-01-01

    The objective of this study is to examine the social profile of students who are enrolled in Basic General Education in Ecuador. Both a descriptive and multidimensional statistical analysis was carried out based on the data provided by the National Survey of Employment, Unemployment and Underemployment in 2015. The descriptive analysis shows the…

  5. Improving Attendance and Punctuality of FE Basic Skill Students through an Innovative Scheme

    ERIC Educational Resources Information Center

    Ade-Ojo, Gordon O.

    2005-01-01

    This paper reports the findings of a study set up to establish the impact of a particular scheme on the attendance and punctuality performance of a group of Basic Skills learners against the backdrop of various theoretical postulations on managing undesirable behavior. Data collected on learners' performance was subjected to statistical analysis…

  6. Statistical Match of the VA 1979-1980 Recipient File against the 1979-1980 Basic Grant Recipient File. Revised.

    ERIC Educational Resources Information Center

    Applied Management Sciences, Inc., Silver Spring, MD.

    The amount of misreporting of Veterans Administration (VA) benefits was assessed, along with the impact of misreporting on the Basic Educational Opportunity Grant (BEOG) program. Accurate financial information is needed to determine appropriate awards. The analysis revealed: over 97% of VA beneficiaries misreported benefits; the total net loss to…

  7. An Inspection on the Gini Coefficient of the Budget Educational Public Expenditure per Student for China's Basic Education

    ERIC Educational Resources Information Center

    Yingxiu, Yang

    2006-01-01

    Using statistical data on the implementing conditions of China's educational expenditure published by the state, this paper studies the Gini coefficient of the budget educational public expenditure per student in order to examine the concentration degree of the educational expenditure for China's basic education and analyze its balanced…

  8. Trees for Ohio

    Treesearch

    Ernest J. Gebhart

    1980-01-01

    Other members of this panel are going to reveal the basic statistics about the coal strip mining industry in Ohio so I will confine my remarks to the revegetation of the spoil banks. So it doesn't appear that Ohio confined its tree planting efforts to spoil banks alone, I will rely on a few statistics.

  9. Idaho State University Statistical Portrait, Academic Year 1998-1999.

    ERIC Educational Resources Information Center

    Idaho State Univ., Pocatello. Office of Institutional Research.

    This report provides basic statistical data for Idaho State University, and includes both point-of-time data as well as trend data. The information is divided into sections emphasizing students, programs, faculty and staff, finances, and physical facilities. Student data includes enrollment, geographical distribution, student/faculty ratios,…

  10. Statistical Report. Fiscal Year 1995: September 1, 1994 - August 31, 1995.

    ERIC Educational Resources Information Center

    Texas Higher Education Coordinating Board, Austin.

    This report provides statistical data on Texas public and independent higher education institutions for fiscal year 1995. An introductory section provides basic information on Texas higher education institutions, while nine major sections cover: (1) student enrollment, including 1990-94 headcount data; headcount by classification, ethnic origin,…

  11. Statistical Report. Fiscal Year 1994: September 1, 1993 - August 31, 1994.

    ERIC Educational Resources Information Center

    Texas Higher Education Coordinating Board, Austin.

    This report provides statistical data on Texas public and independent higher education institutions for fiscal year 1994. An introductory section provides basic information on Texas higher education institutions, while nine major sections cover: (1) student enrollment, including 1989-93 headcount data; headcount by classification, ethnic origin,…

  12. 29 CFR 1904.42 - Requests from the Bureau of Labor Statistics for data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Reporting Fatality, Injury and Illness Information to the Government § 1904.42 Requests from the Bureau of Labor Statistics for data. (a) Basic requirement. If you receive a Survey of Occupational Injuries and Illnesses...

  13. 29 CFR 1904.42 - Requests from the Bureau of Labor Statistics for data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Reporting Fatality, Injury and Illness Information to the Government § 1904.42 Requests from the Bureau of Labor Statistics for data. (a) Basic requirement. If you receive a Survey of Occupational Injuries and Illnesses...

  14. 29 CFR 1904.42 - Requests from the Bureau of Labor Statistics for data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Reporting Fatality, Injury and Illness Information to the Government § 1904.42 Requests from the Bureau of Labor Statistics for data. (a) Basic requirement. If you receive a Survey of Occupational Injuries and Illnesses...

  15. 29 CFR 1904.42 - Requests from the Bureau of Labor Statistics for data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Reporting Fatality, Injury and Illness Information to the Government § 1904.42 Requests from the Bureau of Labor Statistics for data. (a) Basic requirement. If you receive a Survey of Occupational Injuries and Illnesses...

  16. Theoretical Frameworks for Math Fact Fluency

    ERIC Educational Resources Information Center

    Arnold, Katherine

    2012-01-01

    Recent education statistics indicate persistent low math scores for our nation's students. This drop in math proficiency includes deficits in basic number sense and automaticity of math facts. The decrease has been recorded across all grade levels with the elementary levels showing the greatest loss (National Center for Education Statistics,…

  17. Basic Statistical Concepts and Methods for Earth Scientists

    USGS Publications Warehouse

    Olea, Ricardo A.

    2008-01-01

    INTRODUCTION Statistics is the science of collecting, analyzing, interpreting, modeling, and displaying masses of numerical data primarily for the characterization and understanding of incompletely known systems. Over the years, these objectives have led to a fair amount of analytical work to achieve, substantiate, and guide descriptions and inferences.

  18. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    NASA Astrophysics Data System (ADS)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy and is generally accepted as such. We also introduce a new teaching method for the elementary statistics class: in contrast with the traditional course, hypothesis testing is conducted with a simulation-based inference method. The literature has shown that this teaching method works very well in increasing students' understanding of statistics.
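
    A minimal sketch of what simulation-based inference means for hypothesis testing in an introductory course: a two-sample permutation test of a difference in means, with hypothetical exam scores.

      import numpy as np

      rng = np.random.default_rng(0)
      group_a = np.array([72, 85, 90, 66, 78, 88, 75, 81])   # hypothetical scores
      group_b = np.array([68, 74, 70, 65, 80, 71, 69, 73])

      observed = group_a.mean() - group_b.mean()
      pooled = np.concatenate([group_a, group_b])

      n_sim, count = 10000, 0
      for _ in range(n_sim):
          perm = rng.permutation(pooled)
          diff = perm[:len(group_a)].mean() - perm[len(group_a):].mean()
          if abs(diff) >= abs(observed):
              count += 1          # permuted difference at least as extreme as observed

      print(f"observed difference = {observed:.2f}, permutation p-value = {count / n_sim:.4f}")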

  19. Underprotection of unpredictable statistical lives compared to predictable ones

    PubMed Central

    Evans, Nicholas G.; Cotton-Barratt, Owen

    2016-01-01

    Existing ethical discussion considers the differences in care for identified versus statistical lives. However, there has been little attention to the different degrees of care that are taken for different kinds of statistical lives. Here we argue that for a given number of statistical lives at stake, there will sometimes be different, and usually greater, care taken to protect predictable statistical lives, in which the number of lives that will be lost can be predicted fairly accurately, than for unpredictable statistical lives, where the lives are at stake because of a low-probability event, such that most likely no one will be affected by the decision but with low probability some lives will be at stake. One reason for this difference is the statistical challenge of estimating low probabilities, and in particular the tendency of common approaches to underestimate these probabilities. Another is the existence of rational incentives to treat unpredictable risks as if the probabilities were lower than they are. Some of these factors apply outside the pure economic context, to institutions, individuals, and governments. We argue that there is no ethical reason to treat unpredictable statistical lives differently from predictable statistical lives. Moreover, lives that are unpredictable from the perspective of an individual agent may become predictable when aggregated to the level of a societal decision. Underprotection of unpredictable statistical lives is a form of market failure that may need to be corrected by altering regulation, introducing compulsory liability insurance, or other social policies. PMID:27393181

  20. Statistical Prediction of Sea Ice Concentration over Arctic

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Jeong, Jee-Hoon; Kim, Baek-Min

    2017-04-01

    In this study, a statistical method that predicts sea ice concentration (SIC) over the Arctic is developed. We first calculate the Season-reliant Empirical Orthogonal Functions (S-EOFs) of monthly Arctic SIC from Nimbus-7 SMMR and DMSP SSM/I-SSMIS Passive Microwave Data, which contain the seasonal cycles (12 months long) of dominant SIC anomaly patterns. Then, the current SIC state index is determined by projecting the observed SIC anomalies for the latest 12 months onto the S-EOFs. Assuming the current SIC anomalies follow the spatio-temporal evolution in the S-EOFs, we project the future (up to 12 months) SIC anomalies by multiplying each state index by the corresponding S-EOF and summing the contributions. The predictive skill is assessed by hindcast experiments initialized at all months during 1980-2010. When the predictive skill of the statistical model is compared with that of NCEP CFS v2, the statistical model shows higher skill in predicting sea ice concentration and extent.
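
    As a rough illustration of the projection-and-extrapolation idea described above, the following NumPy sketch projects the latest 12 months of anomalies onto each S-EOF and re-expands the resulting state indices along each mode's seasonal cycle. This is not the authors' code; the array names, shapes, and the simple phase-shift logic are assumptions made for illustration only.

      import numpy as np

      # Assumed inputs (illustrative only):
      #   seofs:      (n_modes, 12, n_grid) season-reliant EOFs, one 12-month cycle per mode
      #   anoms_12mo: (12, n_grid)          observed SIC anomalies for the latest 12 months
      #   phase:      index (0-11) of the current calendar month within each cycle

      def state_indices(seofs, anoms_12mo):
          """Project the latest 12 months of anomalies onto each S-EOF."""
          idx = []
          for mode in seofs:
              p = mode.ravel()
              idx.append(anoms_12mo.ravel() @ p / (p @ p))
          return np.array(idx)

      def forecast(seofs, idx, phase, lead=12):
          """Extrapolate anomalies by letting each mode follow its own seasonal cycle."""
          out = np.zeros((lead, seofs.shape[2]))
          for a, mode in zip(idx, seofs):
              for m in range(lead):
                  out[m] += a * mode[(phase + 1 + m) % 12]
          return out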

  1. First principles statistical mechanics of alloys and magnetism

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai

    Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter-free, predictive thermodynamics of materials. We combine our locally self-consistent real space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally, we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
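
    The Wang-Landau idea underlying WL-LSMS can be illustrated on a toy problem. The sketch below applies the flat-histogram scheme to a tiny 2D Ising lattice rather than to a first-principles Hamiltonian; it is meant only to show how the density of states is built up, and all parameters (lattice size, flatness criterion, stopping threshold) are illustrative assumptions.

      import numpy as np

      # Toy Wang-Landau sampler for a tiny 2D Ising lattice (illustrative only;
      # this is not the WL-LSMS code, which couples WL sampling to a DFT Hamiltonian).
      rng = np.random.default_rng(0)
      L = 4
      spins = rng.choice([-1, 1], size=(L, L))

      def energy(s):
          # Nearest-neighbour Ising energy with periodic boundaries.
          return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

      ln_g, hist = {}, {}   # log density of states and visit histogram per energy level
      ln_f = 1.0            # modification factor, reduced whenever the histogram is flat
      E = energy(spins)

      while ln_f > 1e-3:
          for _ in range(20000):
              i, j = rng.integers(L, size=2)
              dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              E_new = E + dE
              # Accept the flip with probability g(E)/g(E_new).
              if rng.random() < np.exp(min(0.0, ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0))):
                  spins[i, j] *= -1
                  E = E_new
              ln_g[E] = ln_g.get(E, 0.0) + ln_f
              hist[E] = hist.get(E, 0) + 1
          counts = np.array(list(hist.values()))
          if counts.min() > 0.8 * counts.mean():  # crude flatness check
              ln_f /= 2.0
              hist = {}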

  2. Does Parental Educational Level Predict Drop-Out from Upper Secondary School for 16- to 24-Year-Olds when Basic Skills Are Accounted For? A Cross Country Comparison

    ERIC Educational Resources Information Center

    Lundetrae, Kjersti

    2011-01-01

    Drop-out from upper secondary school is considered a widespread problem, closely connected with youth unemployment. The aim of the current study was to examine whether parents' level of education predicted drop-out for 16-24-year-olds when accounting for basic skills. For this purpose, data from the Norwegian (n = 996) and American (n = 641)…

  3. Rebuilding Government Legitimacy in Post-conflict Societies: Case Studies of Nepal and Afghanistan

    DTIC Science & Technology

    2015-09-09

    administered via the verbal scales due to reduced time spent explaining the visual show cards. Statistical results corresponded with observations from...a three-step strategy for dealing with item non-response. First, basic descriptive statistics are calculated to determine the extent of item...descriptive statistics for all items in the survey), however this section of the report highlights just some of the findings. Thus, the results

  4. Biostatistical and medical statistics graduate education

    PubMed Central

    2014-01-01

    The development of graduate education in biostatistics and medical statistics is discussed in the context of training within a medical center setting. The need for medical researchers to employ a wide variety of statistical designs in clinical, genetic, basic science and translational settings justifies the ongoing integration of biostatistical training into medical center educational settings and informs its content. The integration of large data issues is a challenge. PMID:24472088

  5. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R

    PubMed Central

    Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is not yet an efficient and easy-to-use toolkit that focuses exclusively on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which can be categorized into three classes: (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept intermediate metadata, such as allele frequencies, rather than raw DNA sequences or genotyping results. PopSc is first implemented as a web-based calculator with a user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages for population genetics analysis. PMID:27792763
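
    Two of the basic statistics covered by such toolkits can be computed directly from allele-frequency metadata, as in the short illustration below. This is not PopSc itself; the functions and the example frequencies are hypothetical.

      def expected_heterozygosity(freqs):
          """He = 1 - sum(p_i^2) for the allele frequencies p_i at one locus."""
          return 1.0 - sum(p * p for p in freqs)

      def fst_two_subpops(p1, p2):
          """Wright's F_ST for a biallelic locus in two equally sized subpopulations."""
          p_bar = (p1 + p2) / 2.0
          h_t = 2.0 * p_bar * (1.0 - p_bar)                    # total expected heterozygosity
          h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-subpopulation value
          return (h_t - h_s) / h_t if h_t > 0 else 0.0

      # Hypothetical example: allele frequencies 0.6/0.4 at a locus, and
      # frequencies 0.8 and 0.4 of the same allele in two subpopulations.
      print(expected_heterozygosity([0.6, 0.4]))  # 0.48
      print(fst_two_subpops(0.8, 0.4))            # ~0.167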

  6. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    PubMed

    Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is not yet an efficient and easy-to-use toolkit that focuses exclusively on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which can be categorized into three classes: (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept intermediate metadata, such as allele frequencies, rather than raw DNA sequences or genotyping results. PopSc is first implemented as a web-based calculator with a user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages for population genetics analysis.

  7. Cognitive and attitudinal predictors related to graphing achievement among pre-service elementary teachers

    NASA Astrophysics Data System (ADS)

    Szyjka, Sebastian P.

    The purpose of this study was to determine the extent to which six cognitive and attitudinal variables predicted pre-service elementary teachers' performance on line graphing. Predictors included Illinois teacher education basic skills sub-component scores in reading comprehension and mathematics, logical thinking performance scores, as well as measures of attitudes toward science, mathematics and graphing. This study also determined the strength of the relationship between each prospective predictor variable and the line graphing performance variable, as well as the extent to which measures of attitude towards science, mathematics and graphing mediated relationships between scores on mathematics, reading, logical thinking and line graphing. Ninety-four pre-service elementary education teachers enrolled in two different elementary science methods courses during the spring 2009 semester at Southern Illinois University Carbondale participated in this study. Each subject completed five different instruments designed to assess science, mathematics and graphing attitudes as well as logical thinking and graphing ability. Sixty subjects provided copies of primary basic skills score reports that listed subset scores for both reading comprehension and mathematics. The remaining scores were supplied by a faculty member who had access to a database from which the scores were drawn. Seven subjects, whose scores could not be found, were eliminated from final data analysis. Confirmatory factor analysis (CFA) was conducted in order to establish validity and reliability of the Questionnaire of Attitude Toward Line Graphs in Science (QALGS) instrument. CFA tested the statistical hypothesis that the five main factor structures within the Questionnaire of Attitude Toward Statistical Graphs (QASG) would be maintained in the revised QALGS. Stepwise Regression Analysis with backward elimination was conducted in order to generate a parsimonious and precise predictive model. This procedure allowed the researcher to explore the relationships among the affective and cognitive variables that were included in the regression analysis. The results for CFA indicated that the revised QALGS measure was sound in its psychometric properties when tested against the QASG. Reliability statistics indicated that the overall reliability for the 32 items in the QALGS was .90. The learning preferences construct had the lowest reliability (.67), while enjoyment (.89), confidence (.86) and usefulness (.77) constructs had moderate to high reliabilities. The first four measurement models fit the data well as indicated by the appropriate descriptive and statistical indices. However, the fifth measurement model did not fit the data well statistically, and only fit well with two descriptive indices. The results addressing the research question indicated that mathematical and logical thinking ability were significant predictors of line graph performance among the remaining group of variables. These predictors accounted for 41% of the total variability on the line graph performance variable. Partial correlation coefficients indicated that mathematics ability accounted for 20.5% of the variance on the line graphing performance variable when removing the effect of logical thinking. The logical thinking variable accounted for 4.7% of the variance on the line graphing performance variable when removing the effect of mathematics ability.
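
    The backward-elimination procedure mentioned above follows a simple loop: fit the full model, drop the weakest predictor, and refit until every remaining predictor is significant. A generic sketch using statsmodels is shown below; the significance threshold and the column handling are assumptions for illustration, not the study's exact settings.

      import pandas as pd
      import statsmodels.api as sm

      def backward_eliminate(X: pd.DataFrame, y, alpha=0.05):
          """Repeatedly drop the predictor with the largest p-value above alpha."""
          cols = list(X.columns)
          while cols:
              fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
              pvals = fit.pvalues.drop("const")
              worst = pvals.idxmax()
              if pvals[worst] <= alpha:
                  return fit, cols        # all remaining predictors are significant
              cols.remove(worst)
          return None, []                 # nothing survived elimination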

  8. Basic needs and their predictors for intubated patients in surgical intensive care units.

    PubMed

    Liu, Jin-Jen; Chou, Fan-Hao; Yeh, Shu-Hui

    2009-01-01

    This study was conducted to investigate the basic needs and communication difficulties of intubated patients in surgical intensive care units (ICUs) and to identify predictors of the basic needs from the patient characteristics and communication difficulties. In this descriptive correlational study, 80 surgical ICU patients were recruited and interviewed using 3 structured questionnaires: demographic information, scale of basic needs, and scale of communication difficulties. The intubated patients were found to have moderate communication difficulties. The sense of being loved and belonging was the most common need in the intubated patients studied (56.00 standardized scores). A significantly positive correlation was found between communication difficulties and general level of basic needs (r = .53, P < .01), and another positive correlation was found between the length of stay in ICUs and the need for love and belonging (r = .25, P < .05). The basic needs of intubated patients could be significantly predicted by communication difficulties (P = .002), use of physical restraints (P = .010), lack of intubation history (P = .005), and lower educational level (P = .005). These 4 predictors accounted for 47% of the total variance in basic needs. The intubated patients in surgical ICUs had moderate basic needs and communication difficulties. The fact that the basic needs could be predicted by communication difficulties, physical restraints, and educational level suggests that nurses in surgical ICUs need to improve skills of communication and limit the use of physical restraints, especially in patients with a lower educational level.

  9. A comparison of general and ambulance specific stressors: predictors of job satisfaction and health problems in a nationwide one-year follow-up study of Norwegian ambulance personnel.

    PubMed

    Sterud, Tom; Hem, Erlend; Lau, Bjørn; Ekeberg, Oivind

    2011-03-31

    To address the relative importance of general job-related stressors, ambulance-specific stressors and individual characteristics in relation to job satisfaction and health complaints (emotional exhaustion, psychological distress and musculoskeletal pain) among ambulance personnel. A nationwide prospective questionnaire survey of ambulance personnel in operational duty at two time points (n = 1180 at baseline, T1, and n = 298 at one-year follow-up, T2). The questionnaires included the Maslach Burnout Inventory, the Job Satisfaction Scale, the Hopkins Symptom Checklist (SCL-10), the Job Stress Survey, the Norwegian Ambulance Stress Survey and the Basic Character Inventory. Overall, 42 out of the possible 56 correlations between job stressors at T1 and job satisfaction and health complaints at T2 were statistically significant. Lower job satisfaction at T2 was predicted by frequency of lack of leader support and severity of challenging job tasks. Emotional exhaustion at T2 was predicted by neuroticism, frequency of lack of support from leader, time pressure, and physical demands. Adjusted for T1 levels, emotional exhaustion was predicted by neuroticism (beta = 0.15, p < 0.05) and time pressure (beta = 0.14, p < 0.01). Psychological distress at T2 was predicted by neuroticism and lack of co-worker support. Adjusted for T1 levels, psychological distress was predicted by neuroticism (beta = 0.12, p < 0.05). Musculoskeletal pain at T2 was predicted by higher age, neuroticism, lack of co-worker support and severity of physical demands. Adjusted for T1 levels, musculoskeletal pain was predicted by neuroticism and severity of physical demands (beta = 0.12, p < 0.05). Low job satisfaction at T2 was predicted by general work-related stressors, whereas health complaints at T2 were predicted by both general work-related stressors and ambulance-specific stressors. The personality variable neuroticism predicted increased complaints across all health outcomes.

  10. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.

  11. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.
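
    The inverse-prediction setup can be made concrete with a toy example: fit the forward model Y = g(X) + error on calibration data, then invert it to estimate X for a new observed Y. The sketch below assumes a simple linear g and made-up numbers; it only illustrates the classical estimator, not the specific methods compared in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.uniform(0, 10, size=200)                  # known source characteristics
      Y = 2.5 * X + 1.0 + rng.normal(0, 0.5, size=200)  # measured observables

      slope, intercept = np.polyfit(X, Y, 1)            # fit the forward model

      def inverse_predict(y_new):
          """Classical estimator: invert the fitted forward model."""
          return (y_new - intercept) / slope

      print(inverse_predict(13.5))  # roughly (13.5 - 1.0) / 2.5 = 5.0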

  12. Does activity limitation predict discharge destination for postacute care patients?

    PubMed

    Chang, Feng-Hang; Ni, Pengsheng; Jette, Alan M

    2014-09-01

    This study aimed to examine the ability of different domains of activity limitation to predict discharge destination (home vs. nonhome settings) 1 mo after hospital discharge for postacute rehabilitation patients. A secondary analysis was conducted using a data set of 518 adults with neurologic, lower extremity orthopedic, and complex medical conditions followed after discharge from a hospital into postacute care. Variables collected at baseline include activity limitations (basic mobility, daily activity, and applied cognitive function, measured by the Activity Measure for Post-Acute Care), demographics, diagnosis, and cognitive status. The discharge destination was recorded at 1 mo after being discharged from the hospital. Correlational analyses revealed that the 1-mo discharge destination was correlated with two domains of activity (basic mobility and daily activity) and cognitive status. However, multiple logistic regression and receiver operating characteristic curve analyses showed that basic mobility functioning performed the best in discriminating home vs. nonhome living. This study supported the evidence that basic mobility functioning is a critical determinant of discharge home for postacute rehabilitation patients. The Activity Measure for Post-Acute Care-basic mobility showed good usability in discriminating home vs. nonhome living. The findings shed light on the importance of basic mobility functioning in the discharge planning process.

  13. Environmental Statistics and Optimal Regulation

    PubMed Central

    2014-01-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493
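
    The second question above, when a Bayesian decision rule pays off, can be illustrated with a toy calculation: combine a prior over the nutrient state with a noisy measurement and express the enzyme only when the posterior expected benefit exceeds the expression cost. All numbers below are made-up assumptions, not parameters from the paper.

      import math

      def posterior_high(measurement, prior_high=0.3, mu_high=1.0, mu_low=0.0, sigma=0.5):
          """P(nutrient high | noisy measurement) under Gaussian measurement noise."""
          def lik(mu):
              return math.exp(-0.5 * ((measurement - mu) / sigma) ** 2)
          num = prior_high * lik(mu_high)
          return num / (num + (1 - prior_high) * lik(mu_low))

      def express_enzyme(measurement, benefit=5.0, cost=2.0):
          """Express only if the posterior expected benefit exceeds the expression cost."""
          return posterior_high(measurement) * benefit > cost

      print(express_enzyme(0.2))  # low reading  -> False with these assumed numbers
      print(express_enzyme(0.9))  # high reading -> True with these assumed numbers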

  14. Basic symptoms in the general population and in psychotic and non-psychotic psychiatric adolescents.

    PubMed

    Meng, Heiner; Schimmelmann, Benno Graf; Koch, Eginhard; Bailey, Barbara; Parzer, Peter; Günter, Michael; Mohler, Beat; Kunz, Natalia; Schulte-Markwort, Michael; Felder, Wilhelm; Zollinger, Rudolf; Bürgin, Dieter; Resch, Franz

    2009-06-01

    Cognitive-perceptive 'basic symptoms' are used as a complement to ultra-high-risk criteria in order to predict onset of psychosis in the pre-psychotic phase. The aim was to investigate the prevalence of a broad selection of 'basic symptoms' in a representative general adolescent population sample (GPS; N=96) and to compare it with adolescents first admitted for early onset psychosis (EOP; N=87) or non-psychotic psychiatric disorders (NP; N=137). Subjects were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). Prevalence of at least one 'basic symptom' and mean numbers were compared across the three groups. Logistic regression was used to predict group membership by BSABS subscales; risk ratios were calculated to identify 'basic symptoms' which best discriminated between groups. The prevalence of at least any one 'basic symptom' was 30.2% in GPS compared to 81% in NP and 96.5% in EOP. Correct classification of EOP when compared to GPS was high (94.0%) and lower when compared to NP (78.6%). Cognitive symptoms discriminated best between EOP and NP. Like other prodromal and psychotic-like experiences, 'basic symptoms' are prevalent in the general adolescent population, yet at a lower rate compared to EOP and NP. The usage of 'at least one basic symptom' as a screening criterion for youth at risk of developing a psychotic disorder is not recommended in the general population or in unselected psychiatrically ill adolescents. However, particularly cognitive 'basic symptoms' may be a valuable criterion to be included in future 'at risk' studies in adolescents.

  15. Simulating boundary layer transition with low-Reynolds-number k-epsilon turbulence models. I - An evaluation of prediction characteristics. II - An approach to improving the predictions

    NASA Technical Reports Server (NTRS)

    Schmidt, R. C.; Patankar, S. V.

    1991-01-01

    The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.

  16. Views of medical students: what, when and how do they want statistics taught?

    PubMed

    Fielding, S; Poobalan, A; Prescott, G J; Marais, D; Aucott, L

    2015-11-01

    A key skill for a practising clinician is being able to do research, understand the statistical analyses and interpret results in the medical literature. Basic statistics has become essential within medical education, but when, what and in which format is uncertain. To inform curriculum design/development we undertook a quantitative survey of fifth year medical students and followed them up with a series of focus groups to obtain their opinions as to what statistics teaching they want, when and how. A total of 145 students undertook the survey and five focus groups were held with between 3 and 9 participants each. Previous statistical training varied; students recognised that their knowledge was inadequate and were keen to see additional training implemented. Students were aware of the importance of statistics to their future careers, but were apprehensive about learning. Face-to-face teaching supported by online resources was popular. Focus groups indicated the need for statistical training early in their degree and highlighted their lack of confidence and inconsistencies in support. The study found that the students see the importance of statistics training in the medical curriculum but that timing and mode of delivery are key. The findings have informed the design of a new course to be implemented in the third undergraduate year. Teaching will be based around published studies, aiming to equip students with the basics required, with additional resources available through a virtual learning environment. © The Author(s) 2015.

  17. A generalized concept for cost-effective structural design. [Statistical Decision Theory applied to aerospace systems

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hawk, J. D.

    1975-01-01

    A generalized concept for cost-effective structural design is introduced. It is assumed that decisions affecting the cost effectiveness of aerospace structures fall into three basic categories: design, verification, and operation. Within these basic categories, certain decisions concerning items such as design configuration, safety factors, testing methods, and operational constraints are to be made. All or some of the variables affecting these decisions may be treated probabilistically. Bayesian statistical decision theory is used as the tool for determining the cost optimum decisions. A special case of the general problem is derived herein, and some very useful parametric curves are developed and applied to several sample structures.

  18. On a Quantum Model of Brain Activities

    NASA Astrophysics Data System (ADS)

    Fichtner, K.-H.; Fichtner, L.; Freudenberg, W.; Ohya, M.

    2010-01-01

    One of the main activities of the brain is the recognition of signals. A first attempt to explain the process of recognition in terms of quantum statistics was given in [6]. Subsequently, details of the mathematical model were presented in a (still incomplete) series of papers (cf. [7, 2, 5, 10]). In the present note we want to give a general view of the principal ideas of this approach. We will introduce the basic spaces and justify the choice of spaces and operations. Further, we bring the model face to face with basic postulates any statistical model of the recognition process should fulfill. These postulates are in accordance with the opinion widely accepted in psychology and neurology.

  19. Validation of CRASH Model in Prediction of 14-day Mortality and 6-month Unfavorable Outcome of Head Trauma Patients.

    PubMed

    Hashemi, Behrooz; Amanat, Mahnaz; Baratloo, Alireza; Forouzanfar, Mohammad Mehdi; Rahmati, Farhad; Motamedi, Maryam; Safari, Saeed

    2016-11-01

    To date, many prognostic models have been proposed to predict the outcome of patients with traumatic brain injuries. External validation of these models in different populations is of great importance for their generalization. The present study was designed to determine the value of the CRASH prognostic model in predicting 14-day mortality (14-DM) and 6-month unfavorable outcome (6-MUO) of patients with traumatic brain injury. In the present prospective diagnostic test study, calibration and discrimination of the CRASH model were evaluated in head trauma patients referred to the emergency department. Variables required for calculating CRASH expected risks (ER), and observed 14-DM and 6-MUO, were gathered. Then the ER of 14-DM and 6-MUO were calculated. The patients were followed for 6 months and their 14-DM and 6-MUO were recorded. Finally, the correlation of the CRASH ER and the observed outcome of the patients was evaluated. The data were analyzed using STATA version 11.0. In this study, 323 patients with a mean age of 34.0 ± 19.4 years were evaluated (87.3% male). Calibration of the basic and CT models in prediction of 14-day and 6-month outcome was in the desirable range (P < 0.05). Areas under the curve for the basic model in prediction of 14-DM and 6-MUO were 0.92 (95% CI: 0.89-0.96) and 0.92 (95% CI: 0.90-0.95), respectively. In addition, areas under the curve for the CT model in prediction of 14-DM and 6-MUO were 0.93 (95% CI: 0.91-0.97) and 0.93 (95% CI: 0.91-0.96), respectively. There was no significant difference between the discriminations of the two models in prediction of 14-DM (p = 0.11) and 6-MUO (p = 0.1). The results of the present study showed that the CRASH prediction model has proper discrimination and calibration in predicting 14-DM and 6-MUO of head trauma patients. Since there was no difference between the values of the basic and CT models, using the basic model is recommended to simplify the risk calculations.

  20. Applications of machine learning in cancer prediction and prognosis.

    PubMed

    Cruz, Joseph A; Wishart, David S

    2007-02-11

    Machine learning is a branch of artificial intelligence that employs a variety of statistical, probabilistic and optimization techniques that allow computers to "learn" from past examples and to detect hard-to-discern patterns from large, noisy or complex data sets. This capability is particularly well-suited to medical applications, especially those that depend on complex proteomic and genomic measurements. As a result, machine learning is frequently used in cancer diagnosis and detection. More recently machine learning has been applied to cancer prognosis and prediction. This latter approach is particularly interesting as it is part of a growing trend towards personalized, predictive medicine. In assembling this review we conducted a broad survey of the different types of machine learning methods being used, the types of data being integrated and the performance of these methods in cancer prediction and prognosis. A number of trends are noted, including a growing dependence on protein biomarkers and microarray data, a strong bias towards applications in prostate and breast cancer, and a heavy reliance on "older" technologies such as artificial neural networks (ANNs) instead of more recently developed or more easily interpretable machine learning methods. A number of published studies also appear to lack an appropriate level of validation or testing. Among the better designed and validated studies it is clear that machine learning methods can be used to substantially (15-25%) improve the accuracy of predicting cancer susceptibility, recurrence and mortality. At a more fundamental level, it is also evident that machine learning is helping to improve our basic understanding of cancer development and progression.

  1. Applications of Machine Learning in Cancer Prediction and Prognosis

    PubMed Central

    Cruz, Joseph A.; Wishart, David S.

    2006-01-01

    Machine learning is a branch of artificial intelligence that employs a variety of statistical, probabilistic and optimization techniques that allow computers to “learn” from past examples and to detect hard-to-discern patterns from large, noisy or complex data sets. This capability is particularly well-suited to medical applications, especially those that depend on complex proteomic and genomic measurements. As a result, machine learning is frequently used in cancer diagnosis and detection. More recently machine learning has been applied to cancer prognosis and prediction. This latter approach is particularly interesting as it is part of a growing trend towards personalized, predictive medicine. In assembling this review we conducted a broad survey of the different types of machine learning methods being used, the types of data being integrated and the performance of these methods in cancer prediction and prognosis. A number of trends are noted, including a growing dependence on protein biomarkers and microarray data, a strong bias towards applications in prostate and breast cancer, and a heavy reliance on “older” technologies such as artificial neural networks (ANNs) instead of more recently developed or more easily interpretable machine learning methods. A number of published studies also appear to lack an appropriate level of validation or testing. Among the better designed and validated studies it is clear that machine learning methods can be used to substantially (15–25%) improve the accuracy of predicting cancer susceptibility, recurrence and mortality. At a more fundamental level, it is also evident that machine learning is helping to improve our basic understanding of cancer development and progression. PMID:19458758

  2. Predictors and Patterns of Local, Regional, and Distant Failure in Squamous Cell Carcinoma of the Vulva.

    PubMed

    Bogani, Giorgio; Cromi, Antonella; Serati, Maurizio; Uccella, Stefano; Donato, Violante Di; Casarin, Jvan; Naro, Edoardo Di; Ghezzi, Fabio

    2017-06-01

    To identify factors predicting for recurrence in vulvar cancer patients undergoing surgical treatment. We retrospectively evaluated data of consecutive patients with squamous cell vulvar cancer treated between January 1, 1990 and December 31, 2013. Basic descriptive statistics and multivariable analysis were used to build models predicting outcomes. Five-year disease-free survival (DFS) and overall survival (OS) were analyzed using the Cox model. The study included 101 patients affected by vulvar cancer: 64 (63%) stage I, 12 (12%) stage II, 20 (20%) stage III, and 5 (5%) stage IV. After a mean (SD) follow-up of 37.6 (22.1) months, 21 (21%) recurrences occurred. Local, regional, and distant failures were recorded in 14 (14%), 6 (6%), and 3 (3%) patients, respectively. Five-year DFS and OS were 77% and 82%, respectively. At multivariate analysis, only stromal invasion >2 mm (hazard ratio: 4.9 [95% confidence interval, 1.17-21.1]; P=0.04) and extracapsular lymph node involvement (hazard ratio: 9.0 [95% confidence interval, 1.17-69.5]; P=0.03) correlated with worse DFS, although no factor independently correlated with OS. Looking at factors influencing local and regional failure, we observed that stromal invasion >2 mm was the only factor predicting for local recurrence, whereas lymph node extracapsular involvement predicted for regional recurrence. Stromal invasion >2 mm and lymph node extracapsular spread are the most important factors predicting for local and regional failure, respectively. Studies evaluating the effectiveness of adjuvant treatment in high-risk patients are warranted.

  3. Validating neural-network refinements of nuclear mass models

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σ_rms ≃ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
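
    The refinement strategy described above amounts to learning the residual between a baseline mass model and experiment and adding the learned correction to new predictions. The sketch below is a drastic simplification: it uses scikit-learn's plain MLP as a stand-in for the Bayesian neural network (so it provides no uncertainties), and the input arrays are assumed.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Assumed inputs: Z, N (proton and neutron numbers) and masses in MeV.
      def train_refinement(Z, N, m_exp, m_model):
          """Fit a network to the residuals m_exp - m_model as a function of (Z, N)."""
          features = np.column_stack([Z, N])
          residuals = m_exp - m_model
          net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
          net.fit(features, residuals)
          return net

      def refined_mass(net, Z_new, N_new, m_base):
          """Baseline model prediction plus the learned correction."""
          return m_base + net.predict(np.column_stack([Z_new, N_new]))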

  4. How do energetic ions damage metallic surfaces?

    DOE PAGES

    Osetskiy, Yury N.; Calder, Andrew F.; Stoller, Roger E.

    2015-02-20

    Surface modification under bombardment by energetic ions is observed under different conditions in structural and functional materials and can be either an unavoidable effect of the conditions or a targeted modification to enhance materials properties. Understanding the basic mechanisms is necessary for predicting property changes. The mechanisms activated during ion irradiation are of atomic scale, and atomic-scale modeling is the most suitable tool to study these processes. In this paper we present results of an extensive simulation program aimed at developing an understanding of primary surface damage in iron by energetic particles. We simulated 25 keV self-ion bombardment of Fe thin films with (100) and (110) surfaces at room temperature. A large number of simulations, ~400, were carried out to allow a statistically significant treatment of the results. The particular mechanism of surface damage depends on how the destructive supersonic shock wave generated by the displacement cascade interacts with the free surface. Three basic scenarios were observed, with the limiting cases being damage created far below the surface with little or no impact on the surface itself, and extensive direct surface damage on the timescale of a few picoseconds. In some instances, formation of large <100> vacancy loops beneath the free surface was observed, which may explain some earlier experimental observations.

  5. A comprehensive guide to the Argentinian case-bearer beetle fauna (Coleoptera, Chrysomelidae, Camptosomata)

    PubMed Central

    Agrain, Federico A.; Chamorro, Maria Lourdes; Cabrera, Nora; Sassi, Davide; Roig-Juñent, Sergio

    2017-01-01

    Knowledge of Argentinian Camptosomata has largely remained static for the last 60 years since the last publication by Francisco de Asis Monrós in the 1950s. One hundred and ninety Camptosomata species (182 Cryptocephalinae and 8 Lamprosomatinae) in 31 genera are recorded herein from Argentina. Illustrated diagnostic keys to the subfamilies, tribes, subtribes and genera of Argentinian Camptosomata, plus species checklists and illustrations for all genera of camptosomatan beetles cited for each political region of Argentina, are provided. General notes on the taxonomy and distribution, as well as basic statistics, are also included. This study provides basic information about the Camptosomata fauna in Argentina that will facilitate accurate generic-level identification of this group and aid subsequent taxonomic revisions and phylogenetic, ecological, and biogeographic studies. This information will also facilitate faunistic comparisons between neighboring countries. Two nomenclatural acts are proposed: Temnodachrys (Temnodachrys) argentina (Guérin, 1952), comb. n., and Metallactus bivitticollis (Jacoby, 1907), comb. n. The following are new records for Argentina: Stegnocephala xanthopyga (Suffrian, 1863) and Lamprosoma azureum Germar, 1824. Currently, the most diverse camptosomate tribe in Argentina is Clytrini, with almost twice the number of species of Cryptocephalini. New records for Argentina are predicted. PMID:28769688

  6. Innovations in curriculum design: A multi-disciplinary approach to teaching statistics to undergraduate medical students

    PubMed Central

    Freeman, Jenny V; Collier, Steve; Staniforth, David; Smith, Kevin J

    2008-01-01

    Background Statistics is relevant to students and practitioners in medicine and health sciences and is increasingly taught as part of the medical curriculum. However, it is common for students to dislike and under-perform in statistics. We sought to address these issues by redesigning the way that statistics is taught. Methods The project brought together a statistician, clinician and educational experts to re-conceptualize the syllabus, and focused on developing different methods of delivery. New teaching materials, including videos, animations and contextualized workbooks were designed and produced, placing greater emphasis on applying statistics and interpreting data. Results Two cohorts of students were evaluated, one with old style and one with new style teaching. Both were similar with respect to age, gender and previous level of statistics. Students who were taught using the new approach could better define the key concepts of p-value and confidence interval (p < 0.001 for both). They were more likely to regard statistics as integral to medical practice (p = 0.03), and to expect to use it in their medical career (p = 0.003). There was no significant difference in the numbers who thought that statistics was essential to understand the literature (p = 0.28) and those who felt comfortable with the basics of statistics (p = 0.06). More than half the students in both cohorts felt that they were comfortable with the basics of medical statistics. Conclusion Using a variety of media, and placing emphasis on interpretation can help make teaching, learning and understanding of statistics more people-centred and relevant, resulting in better outcomes for students. PMID:18452599

  7. A Discussion of the Measurement and Statistical Manipulation of Selected Key Variables in an Adult Basic Education Program.

    ERIC Educational Resources Information Center

    Cunningham, Phyllis M.

    Intending to explore the interaction effects of self-esteem level and perceived program utility on the retention and cognitive achievement of adult basic education students, a self-esteem instrument, to be administered verbally, was constructed with content relevant items developed from and tested on a working class, undereducated, black, adult…

  8. Assessment of Current Jet Noise Prediction Capabilities

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas

    2008-01-01

    An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated, one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined on the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated: JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3 octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.

  9. Summary Statistics of CPB-Qualified Public Radio Stations: Fiscal Year 1971.

    ERIC Educational Resources Information Center

    Lee, S. Young; Pedone, Ronald J.

    Basic statistics on finance, employment, and broadcast and production activities of 103 Corporation for Public Broadcasting (CPB)--qualified radio stations in the United States and Puerto Rico for Fiscal Year 1971 are collected. The first section of the report deals with total funds, income, direct operating costs, capital expenditures, and other…

  10. Using Statistics to Lie, Distort, and Abuse Data

    ERIC Educational Resources Information Center

    Bintz, William; Moore, Sara; Adams, Cheryll; Pierce, Rebecca

    2009-01-01

    Statistics is a branch of mathematics that involves organization, presentation, and interpretation of data, both quantitative and qualitative. Data do not lie, but people do. On the surface, quantitative data are basically inanimate objects, nothing more than lifeless and meaningless symbols that appear on a page, calculator, computer, or in one's…

  11. What Software to Use in the Teaching of Mathematical Subjects?

    ERIC Educational Resources Information Center

    Berežný, Štefan

    2015-01-01

    We can consider two basic views, when using mathematical software in the teaching of mathematical subjects. First: How to learn to use specific software for the specific tasks, e. g., software Statistica for the subjects of Applied statistics, probability and mathematical statistics, or financial mathematics. Second: How to learn to use the…

  12. Intrex Subject/Title Inverted-File Characteristics.

    ERIC Educational Resources Information Center

    Uemura, Syunsuke

    The characteristics of the Intrex subject/title inverted file are analyzed. Basic statistics of the inverted file are presented including various distributions of the index words and terms from which the file was derived, and statistics on stems, the file growth process, and redundancy measurements. A study of stems both with extremely high and…

  13. The Robustness of the Studentized Range Statistic to Violations of the Normality and Homogeneity of Variance Assumptions.

    ERIC Educational Resources Information Center

    Ramseyer, Gary C.; Tcheng, Tse-Kia

    The present study was directed at determining the extent to which the Type I Error rate is affected by violations in the basic assumptions of the q statistic. Monte Carlo methods were employed, and a variety of departures from the assumptions were examined. (Author)

  14. Application of an Online Reference for Reviewing Basic Statistical Principles of Operating Room Management

    ERIC Educational Resources Information Center

    Dexter, Franklin; Masursky, Danielle; Wachtel, Ruth E.; Nussmeier, Nancy A.

    2010-01-01

    Operating room (OR) management differs from clinical anesthesia in that statistical literacy is needed daily to make good decisions. Two of the authors teach a course in operations research for surgical services to anesthesiologists, anesthesia residents, OR nursing directors, hospital administration students, and analysts to provide them with the…

  15. Statistics and Data Interpretation for Social Work

    ERIC Educational Resources Information Center

    Rosenthal, James A.

    2011-01-01

    Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes numerous examples, data sets, and issues that students will encounter in social work practice. The first section introduces basic concepts and terms to…

  16. Using Excel in Teacher Education for Sustainability

    ERIC Educational Resources Information Center

    Aydin, Serhat

    2016-01-01

    In this study, the feasibility of using Excel software in teaching whole Basic Statistics Course and its influence on the attitudes of pre-service science teachers towards statistics were investigated. One hundred and two pre-service science teachers in their second year participated in the study. The data were collected from the prospective…

  17. Basic Math Skills and Performance in an Introductory Statistics Course

    ERIC Educational Resources Information Center

    Johnson, Marianne; Kuennen, Eric

    2006-01-01

    We identify the student characteristics most associated with success in an introductory business statistics class, placing special focus on the relationship between student math skills and course performance, as measured by student grade in the course. To determine which math skills are important for student success, we examine (1) whether the…

  18. An Online Course of Business Statistics: The Proportion of Successful Students

    ERIC Educational Resources Information Center

    Pena-Sanchez, Rolando

    2009-01-01

    This article describes the students' academic progress in an online course of business statistics through interactive software assignments and diverse educational homework, which helps these students to build their own e-learning through basic competences; i.e. interpreting results and solving problems. Cross-tables were built for the categorical…

  19. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to observe the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78%, in the context of northern Pakistan.
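
    McNemar's test, as used above to compare prediction models, operates on the 2x2 table of agreements and disagreements between two classifiers evaluated on the same events. The sketch below uses statsmodels with made-up counts purely to show the mechanics.

      from statsmodels.stats.contingency_tables import mcnemar

      # Rows/columns: model A correct or wrong vs. model B correct or wrong.
      # Counts are hypothetical.
      table = [[50, 12],   # both correct | only A correct
               [4,  30]]   # only B correct | both wrong
      result = mcnemar(table, exact=True)
      print(result.statistic, result.pvalue)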

  20. Statistical Learning of Probabilistic Nonadjacent Dependencies by Multiple-Cue Integration

    ERIC Educational Resources Information Center

    van den Bos, Esther; Christiansen, Morten H.; Misyak, Jennifer B.

    2012-01-01

    Previous studies have indicated that dependencies between nonadjacent elements can be acquired by statistical learning when each element predicts only one other element (deterministic dependencies). The present study investigates statistical learning of probabilistic nonadjacent dependencies, in which each element predicts several other elements…

  1. Geoscience in the Big Data Era: Are models obsolete?

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; Zheng, L.; Stark, P. B.; Morra, G.; Knepley, M.; Wang, X.

    2016-12-01

    In the last few decades, the velocity, volume, and variety of geophysical data have increased, while the development of the Internet and distributed computing has led to the emergence of "data science." Fitting and running numerical models, especially those based on PDEs, is the main consumer of flops in geoscience. Can large amounts of diverse data supplant modeling? Without the ability to conduct randomized, controlled experiments, causal inference requires understanding the physics. It is sometimes possible to predict well without understanding the system—if (1) the system is predictable, (2) data on "important" variables are available, and (3) the system changes slowly enough. And sometimes even a crude model can help the data "speak for themselves" much more clearly. For example, Shearer (1991) used a 1-dimensional velocity model to stack long-period seismograms, revealing upper mantle discontinuities. This was a "big data" approach: the main use of computing was in the data processing, rather than in modeling, yet the "signal" became clear. In contrast, modelers tend to use all available computing power to fit even more complex models, resulting in a cycle where uncertainty quantification (UQ) is never possible: even if realistic UQ required only 1,000 model evaluations, it is never in reach. To make more reliable inferences requires better data analysis and statistics, not more complex models. Geoscientists need to learn new skills and tools: sound software engineering practices; open programming languages suitable for big data; parallel and distributed computing; data visualization; and basic nonparametric, computationally based statistical inference, such as permutation tests. They should work reproducibly, scripting all analyses and avoiding point-and-click tools.
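
    A permutation test of the kind advocated above needs only a few lines of code. The toy example below tests a difference in means by repeatedly shuffling the pooled observations; the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(42)
      a = rng.normal(0.0, 1.0, 40)
      b = rng.normal(0.5, 1.0, 40)

      observed = b.mean() - a.mean()
      pooled = np.concatenate([a, b])
      n_perm, count = 10000, 0
      for _ in range(n_perm):
          rng.shuffle(pooled)                              # random relabeling of groups
          diff = pooled[40:].mean() - pooled[:40].mean()
          if abs(diff) >= abs(observed):
              count += 1
      print("two-sided p =", (count + 1) / (n_perm + 1))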

  2. Hypothesis Testing as an Act of Rationality

    NASA Astrophysics Data System (ADS)

    Nearing, Grey

    2017-04-01

    Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is that we have historically failed to account for a fundamental component of basic logic - namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) logical calculus is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. This also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism. In practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in both hypothetico-deductive and Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006) Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.

  3. Solubility prediction, solvate and cocrystal screening as tools for rational crystal engineering.

    PubMed

    Loschen, Christoph; Klamt, Andreas

    2015-06-01

    The fact that novel drug candidates are becoming increasingly insoluble is a major problem of current drug development. Computational tools may address this issue by screening for suitable solvents or by identifying potential novel cocrystal formers that increase bioavailability. In contrast to other more specialized methods, the fluid phase thermodynamics approach COSMO-RS (conductor-like screening model for real solvents) allows for a comprehensive treatment of drug solubility, solvate and cocrystal formation and many other thermodynamics properties in liquids. This article gives an overview of recent COSMO-RS developments that are of interest for drug development and contains several new application examples for solubility prediction and solvate/cocrystal screening. For all property predictions COSMO-RS has been used. The basic concept of COSMO-RS consists of using the screening charge density as computed from first principles calculations in combination with fast statistical thermodynamics to compute the chemical potential of a compound in solution. The fast and accurate assessment of drug solubility and the identification of suitable solvents, solvate or cocrystal formers is nowadays possible and may be used to complement modern drug development. Efficiency is increased by avoiding costly quantum-chemical computations using a database of previously computed molecular fragments. COSMO-RS theory can be applied to a range of physico-chemical properties, which are of interest in rational crystal engineering. Most notably, in combination with experimental reference data, accurate quantitative solubility predictions in any solvent or solvent mixture are possible. Additionally, COSMO-RS can be extended to the prediction of cocrystal formation, which results in considerable predictive accuracy concerning coformer screening. In a recent variant costly quantum chemical calculations are avoided resulting in a significant speed-up and ease-of-use. © 2015 Royal Pharmaceutical Society.

  4. Metabolomics and Type 2 Diabetes: Translating Basic Research into Clinical Application.

    PubMed

    Klein, Matthias S; Shearer, Jane

    2016-01-01

    Type 2 diabetes (T2D) and its comorbidities have reached epidemic proportions, with more than half a billion cases expected by 2030. Metabolomics is a fairly new approach for studying metabolic changes connected to disease development and progression and for finding predictive biomarkers to enable early interventions, which are most effective against T2D and its comorbidities. In metabolomics, the abundance of a comprehensive set of small biomolecules (metabolites) is measured, thus giving insight into disease-related metabolic alterations. This review shall give an overview of basic metabolomics methods and will highlight current metabolomics research successes in the prediction and diagnosis of T2D. We summarized key metabolites changing in response to T2D. Despite large variations in predictive biomarkers, many studies have replicated elevated plasma levels of branched-chain amino acids and their derivatives, aromatic amino acids and α-hydroxybutyrate ahead of T2D manifestation. In contrast, glycine levels and lysophosphatidylcholine C18:2 are depressed in both predictive studies and with overt disease. The use of metabolomics for predicting T2D comorbidities is gaining momentum, as are our approaches for translating basic metabolomics research into clinical applications. As a result, metabolomics has the potential to enable informed decision-making in the realm of personalized medicine.

  5. Metabolomics and Type 2 Diabetes: Translating Basic Research into Clinical Application

    PubMed Central

    Klein, Matthias S.; Shearer, Jane

    2016-01-01

    Type 2 diabetes (T2D) and its comorbidities have reached epidemic proportions, with more than half a billion cases expected by 2030. Metabolomics is a fairly new approach for studying metabolic changes connected to disease development and progression and for finding predictive biomarkers to enable early interventions, which are most effective against T2D and its comorbidities. In metabolomics, the abundance of a comprehensive set of small biomolecules (metabolites) is measured, thus giving insight into disease-related metabolic alterations. This review shall give an overview of basic metabolomics methods and will highlight current metabolomics research successes in the prediction and diagnosis of T2D. We summarized key metabolites changing in response to T2D. Despite large variations in predictive biomarkers, many studies have replicated elevated plasma levels of branched-chain amino acids and their derivatives, aromatic amino acids and α-hydroxybutyrate ahead of T2D manifestation. In contrast, glycine levels and lysophosphatidylcholine C18:2 are depressed in both predictive studies and with overt disease. The use of metabolomics for predicting T2D comorbidities is gaining momentum, as are our approaches for translating basic metabolomics research into clinical applications. As a result, metabolomics has the potential to enable informed decision-making in the realm of personalized medicine. PMID:26636104

  6. When Preferences Are in the Way: Children's Predictions of Goal-Directed Behaviors.

    PubMed

    Yang, Fan; Frye, Douglas

    2017-12-18

    Across three studies, we examined 4- to 7-year-olds' predictions of goal-directed behaviors when goals conflict with preferences. In Study 1, when presented with stories in which a character had to act against basic preferences to achieve an interpersonal goal (e.g., playing with a partner), 6- and 7-year-olds were more likely than 4- and 5-year-olds to predict the actor would act in accordance with the goal to play with the partner, instead of fulfilling the basic preference of playing a favored activity. Similar results were obtained in Study 2 with scenarios that each involved a single individual pursuing intrapersonal goals that conflicted with his or her basic preferences. In Study 3, younger children's predictions of goal-directed behaviors did not increase for novel goals and preferences, when the influences of their own preferences, future thinking, or a lack of impulse control were minimized. The results suggest that between ages 4 and 7, children increasingly integrate and give more weight to other sources of motivational information (e.g., goals) in addition to preferences when predicting people's behaviors. This increasing awareness may have implications for children's self-regulatory and goal pursuit behaviors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Knowledge, attitude and anxiety pertaining to basic life support and medical emergencies among dental interns in Mangalore City, India.

    PubMed

    Somaraj, Vinej; Shenoy, Rekha P; Panchmal, Ganesh Shenoy; Jodalli, Praveen S; Sonde, Laxminarayan; Karkal, Ravichandra

    2017-01-01

    This cross-sectional study aimed to assess the knowledge, attitude and anxiety pertaining to basic life support (BLS) and medical emergencies among interns in dental colleges of Mangalore city, Karnataka, India. The study subjects comprised interns who volunteered from the four dental colleges. The knowledge and attitude of the interns were assessed using a 30-item questionnaire based on the Basic Life Support Manual of the American Heart Association, and the interns' anxiety pertaining to BLS and medical emergencies was assessed using the State-Trait Anxiety Inventory (STAI) questionnaire. The chi-square test was performed in SPSS 21.0 (IBM Statistics, 2012) to determine statistically significant differences (P<0.05) between assessed knowledge and anxiety. Out of 183 interns, 39.89% had below-average knowledge. A total of 123 (67.21%) reported unavailability of professional training. The majority (180, 98.36%) felt an urgent need for training in basic life support procedures. Assessment of stress showed that 27.1% of participants were above the high-stress level. The comparison of assessed knowledge and stress was statistically nonsignificant (P=0.983). There was an evident lack of knowledge pertaining to the management of medical emergencies among the interns. As these interns move into society as oral health care providers, a focus should be placed on their training in basic life support procedures.

  8. Are emergency medical technician-basics able to use a selective immobilization of the cervical spine protocol?: a preliminary report.

    PubMed

    Dunn, Thomas M; Dalton, Alice; Dorfman, Todd; Dunn, William W

    2004-01-01

    To be a first step in determining whether emergency medical technician (EMT)-Basics are capable of using a protocol that allows for selective immobilization of the cervical spine. Such protocols are coming into use at the advanced life support level and could be beneficial when used by basic life support providers. A convenience sample of participants (n=95) from 11 emergency medical services agencies and one college class participated in the study. All participants evaluated six patients in written scenarios and decided which should be placed into spinal precautions according to a selective spinal immobilization protocol. Systems without an existing selective spinal immobilization protocol received a one-hour continuing education lecture on the topic. College students received a similar lecture written so laypersons could understand the protocol. All participants showed proficiency when applying a selective immobilization protocol to patients in paper-based scenarios. Furthermore, EMT-Basics performed at the same level as paramedics when following the protocol. Statistical analysis revealed no significant differences between EMT-Basics and paramedics. A follow-up group of college students (added to provide a non-EMS comparison group) also performed as well as paramedics when making decisions to use spinal precautions. Differences between college students and paramedics were also statistically nonsignificant. The results suggest that EMT-Basics are as accurate as paramedics when making decisions regarding selective immobilization of the cervical spine in paper-based scenarios. That laypersons are also proficient when using the protocol could indicate that it is extremely simple to follow. This study is a first step toward the necessary additional studies evaluating the efficacy of EMT-Basics using selective immobilization as a regular practice.

  9. Population activity statistics dissect subthreshold and spiking variability in V1.

    PubMed

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
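
    A minimal sketch of the contrast between the two noise assumptions is given below for a pair of model neurons receiving shared input: the rectified-Gaussian (RG) counts inherit their variability from correlated "membrane potentials", while the doubly stochastic Poisson (DSP) counts add spike-generation noise on top of the same drive. All parameters are illustrative; the simulation is not calibrated to the paper's V1 data.

```python
# Sketch: spike-count statistics of a neuron pair under RG-style versus
# DSP-style variability. Parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20_000
mu = np.array([2.0, 2.0])          # mean "membrane potential" per neuron
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])       # shared variability across the pair

# Correlated Gaussian "membrane potentials", one row per trial
v = rng.multivariate_normal(mu, cov, size=n_trials)

# RG model: count is a rectified (threshold-linear) function of the potential
counts_rg = np.maximum(v, 0.0)

# DSP model: the same rectified drive sets a rate; spikes add Poisson noise
counts_dsp = rng.poisson(np.maximum(v, 0.0))

for name, counts in [("RG", counts_rg), ("DSP", counts_dsp)]:
    fano = counts.var(axis=0) / counts.mean(axis=0)
    rho = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
    print(f"{name}: Fano factors = {fano.round(2)}, pairwise correlation = {rho:.2f}")
```

    The extra Poisson stage dilutes the pairwise count correlation relative to the underlying potential correlation, which is the kind of divergence in joint statistics the abstract uses to discriminate between the models.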

  10. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors

    DTIC Science & Technology

    2015-07-15

    The report compares neural network and linear regression models in statistically predicting the mental and physical health status of breast cancer survivors, drawing on data concerning the long-term effects on cancer survivors' quality of life of physical training versus physical training combined with cognitive-behavioral therapy.

  11. Extreme alien light allows survival of terrestrial bacteria

    NASA Astrophysics Data System (ADS)

    Johnson, Neil; Zhao, Guannan; Caycedo, Felipe; Manrique, Pedro; Qi, Hong; Rodriguez, Ferney; Quiroga, Luis

    2013-07-01

    Photosynthetic organisms provide a crucial coupling between the Sun's energy and metabolic processes supporting life on Earth. Searches for extraterrestrial life focus on seeking planets with similar incident light intensities and environments. However, the impact of abnormal photon arrival times has not been considered. Here we present the counterintuitive result that broad classes of extreme alien light could support terrestrial bacterial life, whereas sources more similar to our Sun might not. Our detailed microscopic model uses state-of-the-art empirical inputs including Atomic Force Microscopy (AFM) images. It predicts a highly nonlinear survivability for the basic lifeform Rsp. photometricum, whereby toxic photon feeds get converted into a benign metabolic energy supply by an interplay between the membrane's spatial structure and temporal excitation processes. More generally, our work suggests a new handle for manipulating terrestrial photosynthesis using currently available photon sources with extreme-value statistics.

  12. Selection effects and binary galaxy velocity differences

    NASA Technical Reports Server (NTRS)

    Schneider, Stephen E.; Salpeter, Edwin E.

    1990-01-01

    Measurements of the velocity differences (delta v's) in pairs of galaxies from large statistical samples have often been used to estimate the average masses of binary galaxies. A basic prediction of these models is that the delta v distribution ought to decline monotonically. However, some peculiar aspects of the kinematics have been uncovered, with an anomalous preference for delta v approximately equal to 72 km s^-1 appearing to be present in the data. The authors examine a large sample of binary galaxies with accurate redshift measurements and confirm that the distribution of delta v's appears to be non-monotonic, with peaks at 0 and approximately 72 km s^-1. The authors suggest that the non-zero peak results from the isolation criteria employed in defining samples of binaries and that it indicates there are two populations of binary orbits contributing to the observed delta v distribution.

  13. Dysfunctional Metacognitive Beliefs Are Associated with Decreased Executive Control

    PubMed Central

    Kraft, Brage; Jonassen, Rune; Stiles, Tore C.; Landrø, Nils I.

    2017-01-01

    Dysfunctional metacognitive beliefs (“metacognitions”) and executive control are important factors in mental disorders such as depression and anxiety, but the relationship between these concepts has not been studied systematically. We examined whether there is an association between metacognitions and executive control and hypothesized that decreased executive control statistically predicts increased levels of metacognitions. Two hundred and ninety-nine individuals recruited from the general population and outpatient psychiatric clinics completed the Metacognitions Questionnaire-30 and three subtests from the Cambridge Neuropsychological Test Automated Battery corresponding to the three-component model of executive functions. Controlling for current depression and anxiety symptoms, decreased ability to shift between mental sets was associated with increased negative beliefs about the uncontrollability and danger of worry and beliefs about the need to control thoughts. The results suggest a basic association between metacognitions and executive control. Individual differences in executive control could prove important in the personalization of metacognitive therapy. PMID:28469590

  14. Zipf's word frequency law in natural language: a critical review and future directions.

    PubMed

    Piantadosi, Steven T

    2014-10-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
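
    A minimal sketch of the basic empirical check is given below: count word frequencies in any plain-text corpus and fit the slope of the rank-frequency relation in log-log space, which Zipf's law predicts to be roughly -1. The file name "corpus.txt" is a placeholder for whatever text is available.

```python
# Sketch: check Zipf's law (frequency ~ 1/rank^alpha) on a plain-text corpus
# by fitting the slope of the rank-frequency curve in log-log space.
import re
from collections import Counter
import numpy as np

text = open("corpus.txt", encoding="utf-8").read().lower()   # placeholder corpus file
words = re.findall(r"[a-z']+", text)
freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)

ranks = np.arange(1, len(freqs) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"fitted exponent alpha = {-slope:.2f} (Zipf's law predicts roughly 1)")
```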

  15. Inferring epidemiological dynamics of infectious diseases using Tajima's D statistic on nucleotide sequences of pathogens.

    PubMed

    Kim, Kiyeon; Omori, Ryosuke; Ito, Kimihito

    2017-12-01

    The estimation of the basic reproduction number is essential to understanding epidemic dynamics, and time series data of infected individuals are usually used for the estimation. However, such data are not always available. Methods to estimate the basic reproduction number using genealogies constructed from nucleotide sequences of pathogens have been proposed. Here, we propose a new method to estimate epidemiological parameters of outbreaks using the time series change of Tajima's D statistic on the nucleotide sequences of pathogens. To relate the time evolution of Tajima's D to the number of infected individuals, we constructed a parsimonious mathematical model describing both the transmission process of pathogens among hosts and the evolutionary process of the pathogens. As a case study, we applied this method to field data of nucleotide sequences of pandemic influenza A (H1N1) 2009 viruses collected in Argentina. The Tajima's D-based method estimated the basic reproduction number to be 1.55, with 95% highest posterior density (HPD) between 1.31 and 2.05, and the date of the epidemic peak to be 10th July, with 95% HPD between 22nd June and 9th August. The estimated basic reproduction number was consistent with the estimate from a birth-death skyline plot and the estimate using the time series of the number of infected individuals. These results suggest that Tajima's D statistic on nucleotide sequences of pathogens could be useful for estimating epidemiological parameters of outbreaks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
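
    A minimal sketch of the statistic itself is given below, following the standard Tajima (1989) formulation from segregating sites and mean pairwise differences; the transmission/evolution model that links the time course of D to the reproduction number is not reproduced here, and the toy alignment is purely illustrative.

```python
# Sketch: Tajima's D for a set of aligned nucleotide sequences (Tajima 1989).
from itertools import combinations

def tajimas_d(seqs):
    n = len(seqs)
    L = len(seqs[0])
    # Number of segregating sites S and mean pairwise difference pi
    S = sum(1 for i in range(L) if len({s[i] for s in seqs}) > 1)
    pair_diffs = [sum(a != b for a, b in zip(s1, s2))
                  for s1, s2 in combinations(seqs, 2)]
    pi = sum(pair_diffs) / len(pair_diffs)

    # Standard normalizing constants
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)

    if S == 0:
        return 0.0
    var = e1 * S + e2 * S * (S - 1)
    return (pi - S / a1) / var ** 0.5

# Toy alignment; real input would be pathogen sequences sampled over time
seqs = ["ACGTACGTAC", "ACGTACGTAT", "ACGAACGTAC", "ACGTACGTAC", "ACGTTCGTAC"]
print(f"Tajima's D = {tajimas_d(seqs):.3f}")
```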

  16. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
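
    The sketch below is an illustrative reconstruction of the general testing procedure: measure how often a fixed set of alarm windows captures events in simulated catalogs, and compare with the success on the observed catalog. The alarm windows, catalog, and Poisson simulation are placeholders, not the VLF prediction method or the clustered seismicity models evaluated in the study.

```python
# Sketch: significance of prediction success relative to simulated (here,
# unclustered Poisson) catalogs. All inputs are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(7)
t_max = 1000.0                                    # days of observation
alarms = [(100, 120), (400, 430), (700, 710)]     # fixed alarm windows (days)

def hits(catalog, alarms):
    return sum(any(a <= t <= b for a, b in alarms) for t in catalog)

observed = np.sort(rng.uniform(0, t_max, size=30))   # stand-in "real" catalog
obs_hits = hits(observed, alarms)

n_sim, rate = 5000, len(observed) / t_max
sim_hits = []
for _ in range(n_sim):
    n_events = rng.poisson(rate * t_max)
    sim = rng.uniform(0, t_max, size=n_events)        # Poisson (unclustered) catalog
    sim_hits.append(hits(sim, alarms))

p = (np.sum(np.array(sim_hits) >= obs_hits) + 1) / (n_sim + 1)
print(f"observed hits = {obs_hits}, significance vs Poisson catalogs: p = {p:.3f}")
```

    Replacing the uniform simulation with a clustered one (for example, attaching aftershock sequences to simulated mainshocks) is exactly the change whose effect on the significance level the abstract describes.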

  17. Prediction of rainfall anomalies during the dry to wet transition season over the Southern Amazonia using machine learning tools

    NASA Astrophysics Data System (ADS)

    Shan, X.; Zhang, K.; Zhuang, Y.; Fu, R.; Hong, Y.

    2017-12-01

    Seasonal prediction of rainfall during the dry-to-wet transition season in austral spring (September-November) over southern Amazonia is central to improving crop planting and fire mitigation in that region. Previous studies have identified the key large-scale atmospheric dynamic and thermodynamic pre-conditions during the dry season (June-August) that influence the rainfall anomalies during the dry-to-wet transition season over Southern Amazonia. Based on these key pre-conditions during the dry season, we have evaluated several statistical models and developed a neural-network-based statistical prediction system to predict rainfall during the dry-to-wet transition for Southern Amazonia (5-15°S, 50-70°W). Multivariate Empirical Orthogonal Function (EOF) analysis is applied to the following four JJA fields from the ECMWF Reanalysis (ERA-Interim), spanning 1979 to 2015: geopotential height at 200 hPa, surface relative humidity, convective inhibition (CIN) index and convective available potential energy (CAPE), to filter out noise and highlight the most coherent spatial and temporal variations. The first 10 EOF modes are retained as inputs to the statistical models, accounting for at least 70% of the total variance in the predictor fields. We have tested several linear and non-linear statistical methods. While regularized Ridge Regression and Lasso Regression can generally capture the spatial pattern and magnitude of rainfall anomalies, we found that the Neural Network performs best, with an accuracy greater than 80%, as expected from the non-linear dependence of the rainfall on the large-scale atmospheric thermodynamic conditions and circulation. Further tests of various prediction skill metrics and hindcasts also suggest this Neural Network prediction approach can significantly improve seasonal prediction skill relative to dynamical predictions and regression-based statistical predictions. Thus, this statistical prediction system shows potential to improve real-time seasonal rainfall predictions in the future.
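
    A minimal sketch of this pipeline is shown below: EOF-style dimensionality reduction (here, scikit-learn PCA as a stand-in for the multivariate EOF step) followed by a neural-network regression onto a seasonal rainfall anomaly. The data are synthetic; the grid size, mode structure, and target are placeholders, not the ERA-Interim fields used in the study.

```python
# Sketch: EOF/PCA truncation of predictor fields followed by a neural-network
# regression onto a rainfall anomaly. All data below are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_years, n_grid = 37, 500                      # e.g. 1979-2015, flattened grid points

# Synthetic predictor fields built from a few coherent spatial modes plus noise
modes = rng.normal(size=(3, n_grid))
amplitudes = rng.normal(size=(n_years, 3))     # year-to-year mode amplitudes
X = amplitudes @ modes + 0.5 * rng.normal(size=(n_years, n_grid))
y = 1.2 * amplitudes[:, 0] - 0.8 * amplitudes[:, 1] + 0.3 * rng.normal(size=n_years)

# Retain the leading 10 modes, mirroring the study's EOF truncation
pcs = PCA(n_components=10).fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(pcs, y, test_size=0.3, random_state=1)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1)
net.fit(X_tr, y_tr)
print(f"R^2 on held-out years: {net.score(X_te, y_te):.2f}")
```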

  18. Metabolic networks evolve towards states of maximum entropy production.

    PubMed

    Unrean, Pornkamol; Srienc, Friedrich

    2011-11-01

    A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state, we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations, the specific growth rate of the strain continuously increased, together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted for the state in which the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Developing QSPR model of gas/particle partition coefficients of neutral poly-/perfluoroalkyl substances

    NASA Astrophysics Data System (ADS)

    Yuan, Quan; Ma, Guangcai; Xu, Ting; Serge, Bakire; Yu, Haiying; Chen, Jianrong; Lin, Hongjun

    2016-10-01

    Poly-/perfluoroalkyl substances (PFASs) are a class of synthetic fluorinated organic substances that raise increasing concern because of their environmental persistence, bioaccumulation and widespread presence in various environmental media and organisms. PFASs can be released into the atmosphere through both direct and indirect sources, and the gas/particle partition coefficient (KP) is an important parameter that helps us to understand their atmospheric behavior. In this study, we developed a temperature-dependent predictive model for log KP of PFASs and analyzed the molecular mechanism that governs their partitioning equilibrium between the gas phase and the particle phase. All theoretical computations were carried out at the B3LYP/6-31G(d,p) level, based on neutral molecular structures, with the Gaussian 09 program package. The regression model shows good statistical performance and robustness. The application domain has also been defined according to OECD guidance. The mechanism analysis shows that electrostatic interaction and dispersion interaction play the most important role in the partitioning equilibrium. The developed model can be used to predict log KP values of neutral fluorotelomer alcohols and perfluorinated sulfonamides/sulfonamidoethanols with different substitutions at nitrogen atoms, providing basic data for their ecological risk assessment.

  20. Development and evaluation of predictive model for bovine serum albumin-water partition coefficients of neutral organic chemicals.

    PubMed

    Ma, Guangcai; Yuan, Quan; Yu, Haiying; Lin, Hongjun; Chen, Jianrong; Hong, Huachang

    2017-04-01

    The binding of organic chemicals to serum albumin can significantly reduce their unbound concentration in blood and affect their biological reactions. In this study, we developed a new QSAR model for bovine serum albumin (BSA)-water partition coefficients (K_BSA/W) of neutral organic chemicals with large structural variance, with logK_BSA/W values covering 3.5 orders of magnitude (1.19-4.76). All chemical geometries were optimized with the semi-empirical PM6 algorithm. Several quantum chemical parameters that reflect various intermolecular interactions as well as hydrophobicity were selected to develop the QSAR model. The results indicate that the regression model derived from logK_ow, the most positive net atomic charge on an atom, the Connolly solvent-excluded volume, polarizability, and Abraham acidity could explain the partitioning mechanism of organic chemicals between BSA and water. The simulated external validation and cross-validation verify that the developed model has good statistical robustness and predictive ability; it can thus be used to estimate logK_BSA/W values for chemicals within the application domain and accordingly provide basic data for the toxicity assessment of these chemicals. Copyright © 2016 Elsevier Inc. All rights reserved.
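
    A minimal sketch of a descriptor-based QSAR of this kind is given below: logK_BSA/W modelled as a linear function of five descriptors, with cross-validation. The descriptor values and coefficients are randomly generated placeholders, not computed PM6 outputs or the paper's fitted model.

```python
# Sketch: five-descriptor linear QSAR for logK_BSA/W with cross-validation.
# Descriptor values below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_chemicals = 80
# Columns stand in for: logKow, max positive atomic charge, Connolly volume,
# polarizability, Abraham acidity
X = rng.normal(size=(n_chemicals, 5))
true_coefs = np.array([0.8, 0.4, 0.3, 0.2, -0.3])
y = 2.5 + X @ true_coefs + 0.2 * rng.normal(size=n_chemicals)   # logK_BSA/W

model = LinearRegression().fit(X, y)
q2 = cross_val_score(model, X, y, cv=5, scoring="r2")            # cross-validated fit
print("coefficients:", model.coef_.round(2))
print(f"R^2 = {model.score(X, y):.2f}, mean cross-validated Q^2 = {q2.mean():.2f}")
```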

  1. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA and northern Baja California Norte, Mexico.

  2. Tissue Chips to aid drug development and modeling for rare diseases

    PubMed Central

    Low, Lucie A.; Tagle, Danilo A.

    2016-01-01

    Introduction: The technologies used to design, create and use microphysiological systems (MPS, “tissue chips” or “organs-on-chips”) have progressed rapidly in the last 5 years, and validation studies of the functional relevance of these platforms to human physiology, and response to drugs for individual model organ systems, are well underway. These studies are paving the way for integrated multi-organ systems that can model diseases and predict drug efficacy and toxicology of multiple organs in real-time, improving the potential for diagnostics and development of novel treatments of rare diseases in the future. Areas covered: This review will briefly summarize the current state of tissue chip research and highlight model systems where these microfabricated (or bioengineered) devices are already being used to screen therapeutics, model disease states, and provide potential treatments in addition to helping elucidate the basic molecular and cellular phenotypes of rare diseases. Expert opinion: Microphysiological systems hold great promise and potential for modeling rare disorders, as well as for their potential use to enhance the predictive power of new drug therapeutics, plus potentially increase the statistical power of clinical trials while removing the inherent risks of these trials in rare disease populations. PMID:28626620

  3. Separation of time scales in one-dimensional directed nucleation-growth processes

    NASA Astrophysics Data System (ADS)

    Pierobon, Paolo; Miné-Hattab, Judith; Cappello, Giovanni; Viovy, Jean-Louis; Lagomarsino, Marco Cosentino

    2010-12-01

    Proteins involved in homologous recombination such as RecA and hRad51 polymerize on single- and double-stranded DNA according to a nucleation-growth kinetics, which can be monitored by single-molecule in vitro assays. The basic models currently used to extract biochemical rates rely on ensemble averages and are typically based on an underlying process of bidirectional polymerization, in contrast with the often observed anisotropic polymerization of similar proteins. For these reasons, if one considers single-molecule experiments, the available models are useful to understand observations only in some regimes. In particular, recent experiments have highlighted a steplike polymerization kinetics. The classical model of one-dimensional nucleation growth, the Kolmogorov-Avrami-Mehl-Johnson (KAMJ) model, predicts the correct polymerization kinetics only in some regimes and fails to predict the steplike behavior. This work illustrates by simulations and analytical arguments the limitation of applicability of the KAMJ description and proposes a minimal model for the statistics of the steps based on the so-called stick-breaking stochastic process. We argue that this insight might be useful to extract information on the time and length scales involved in the polymerization kinetics.

  4. External validation of ADO, DOSE, COTE and CODEX at predicting death in primary care patients with COPD using standard and machine learning approaches.

    PubMed

    Morales, Daniel R; Flynn, Rob; Zhang, Jianguo; Trucco, Emmanuel; Quint, Jennifer K; Zutis, Kris

    2018-05-01

    Several models for predicting the risk of death in people with chronic obstructive pulmonary disease (COPD) exist but have not undergone large-scale validation in primary care. The objective of this study was to externally validate these models using statistical and machine learning approaches. We used a primary care COPD cohort identified using data from the UK Clinical Practice Research Datalink. Age-standardised mortality rates were calculated for the population by gender, and the discrimination of ADO (age, dyspnoea, airflow obstruction), COTE (COPD-specific comorbidity test), DOSE (dyspnoea, airflow obstruction, smoking, exacerbations) and CODEX (comorbidity, dyspnoea, airflow obstruction, exacerbations) at predicting death over 1-3 years was measured using logistic regression and a support vector machine (SVM) learning method. The age-standardised mortality rate was 32.8 (95%CI 32.5-33.1) and 25.2 (95%CI 25.4-25.7) per 1000 person years for men and women respectively. Complete data were available for 54,879 patients to predict 1-year mortality. ADO performed the best (c-statistic of 0.730) compared with DOSE (c-statistic 0.645), COTE (c-statistic 0.655) and CODEX (c-statistic 0.649) at predicting 1-year mortality. Discrimination of ADO and DOSE improved at predicting 1-year mortality when combined with COTE comorbidities (c-statistic 0.780 ADO + COTE; c-statistic 0.727 DOSE + COTE). Discrimination did not change significantly over 1-3 years. Comparable results were observed using the SVM. In primary care, ADO appears superior at predicting death in COPD. Performance of ADO and DOSE improved when combined with COTE comorbidities, suggesting better models may be generated with additional data facilitated using novel approaches. Copyright © 2018. Published by Elsevier Ltd.
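
    A minimal sketch of the discrimination comparison is given below: the c-statistic (ROC AUC) of a logistic-regression score versus an SVM on the same predictors, using scikit-learn. The four synthetic predictors and coefficients are arbitrary stand-ins for the index components, not the Clinical Practice Research Datalink data.

```python
# Sketch: comparing c-statistics (ROC AUC) of logistic regression and an SVM
# on the same predictors. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 4))                  # e.g. age, dyspnoea, FEV1, exacerbations
logit = 0.9 * X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1-year death indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)

auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
auc_svm = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
print(f"c-statistic: logistic regression = {auc_lr:.3f}, SVM = {auc_svm:.3f}")
```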

  5. Prognostic factors in patients with advanced cancer: use of the patient-generated subjective global assessment in survival prediction.

    PubMed

    Martin, Lisa; Watanabe, Sharon; Fainsinger, Robin; Lau, Francis; Ghosh, Sunita; Quan, Hue; Atkins, Marlis; Fassbender, Konrad; Downing, G Michael; Baracos, Vickie

    2010-10-01

    To determine whether elements of a standard nutritional screening assessment are independently prognostic of survival in patients with advanced cancer. A prospective nested cohort of patients with metastatic cancer was accrued from different units of a Regional Palliative Care Program. Patients completed a nutritional screen on admission. Data included age, sex, cancer site, height, weight history, dietary intake, 13 nutrition impact symptoms, and patient- and physician-reported performance status (PS). Univariate and multivariate survival analyses were conducted. Concordance statistics (c-statistics) were used to test the predictive accuracy of models based on training and validation sets; a c-statistic of 0.5 indicates the model predicts the outcome as well as chance; perfect prediction has a c-statistic of 1.0. A training set of patients in palliative home care (n = 1,164) was used to identify prognostic variables. Primary disease site, PS, short-term weight change (either gain or loss), dietary intake, and dysphagia predicted survival in multivariate analysis (P < .05). A model including only disease site and PS showed high c-statistics between predicted and observed responses for survival in the training set (0.90) and the validation set (0.88; n = 603). The addition of weight change, dietary intake, and dysphagia did not further improve the c-statistic of the model. The c-statistic was also not altered by substituting physician-rated palliative PS for patient-reported PS. We demonstrate a high probability of concordance between predicted and observed survival for patients in distinct palliative care settings (home care, tertiary inpatient, ambulatory outpatient) based on patient-reported information.

  6. How Coaches' Motivations Mediate Between Basic Psychological Needs and Well-Being/Ill-Being.

    PubMed

    Alcaraz, Saul; Torregrosa, Miquel; Viladrich, Carme

    2015-01-01

    The purpose of the present research was to test how behavioral regulations mediate between basic psychological needs and psychological well-being and ill-being in a sample of team-sport coaches. Based on self-determination theory, we hypothesized a model in which satisfaction and thwarting of the basic psychological needs predicted coaches' behavioral regulations, which in turn led them to experience well-being (i.e., subjective vitality, positive affect) or ill-being (i.e., perceived stress, negative affect). Three hundred and two coaches participated in the study (mean age = 25.97 years; 82% male). For each instrument employed, the measurement model with the best psychometric properties was selected from a sequence of nested models supported by previous research, including exploratory structural equation models and confirmatory factor analysis. These measurement models were included in 3 structural equation models to test for mediation: partial mediation, complete mediation, and absence of mediation. The results provided support for the partial mediation model. Coaches' motivation mediated the relationships of both relatedness need satisfaction and basic psychological needs thwarting with coaches' well-being. In contrast, the relationships between basic psychological needs satisfaction and thwarting and ill-being were predicted by direct effects only. Our results highlight that 3 conditions seem necessary for coaches to experience psychological well-being in their teams: basic psychological needs satisfaction, especially relatedness; lack of basic psychological needs thwarting; and self-determined motivation.

  7. Seasonal Drought Prediction: Advances, Challenges, and Future Prospects

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Singh, Vijay P.; Xia, Youlong

    2018-03-01

    Drought prediction is of critical importance for early warning in drought management. This review provides a synthesis of drought prediction based on statistical, dynamical, and hybrid methods. Statistical drought prediction is achieved by modeling the relationship between drought indices of interest and a suite of potential predictors, including large-scale climate indices, local climate variables, and land initial conditions. Dynamical meteorological drought prediction relies on seasonal climate forecasts from general circulation models (GCMs), which can be employed to drive hydrological models for agricultural and hydrological drought prediction, with the predictability determined by both climate forcings and initial conditions. Challenges still exist in drought prediction at long lead times and under a changing environment resulting from natural and anthropogenic factors. Future research prospects to improve drought prediction include, but are not limited to, high-quality data assimilation, improved model development with key processes related to drought occurrence, optimal ensemble forecasting to select or weight ensembles, and hybrid drought prediction to merge statistical and dynamical forecasts.

  8. Area computer model for transportation noise prediction : phase II--improved noise prediction methods.

    DOT National Transportation Integrated Search

    1975-01-01

    This report recommended that NOISE 3 initially use the same basic logic as the MICNOISE program for highway noise prediction except that additional options be made available, such as flexibility in specifying vehicle noise sources. A choice of six no...

  9. Training Data Requirement for a Neural Network to Predict Aerodynamic Coefficients

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David (Technical Monitor); Rajkumar, T.; Bardina, Jorge

    2003-01-01

    Basic aerodynamic coefficients are modeled as functions of angle of attack, speed brake deflection angle, Mach number, and sideslip angle. Most of the aerodynamic parameters can be well fitted using polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients. We encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which type of transfer function should be used between the input and hidden layers and between the hidden and output layers. In this paper, a comparative study of the efficiency of neural network prediction based on different transfer functions and training dataset sizes is presented. The results of the neural network prediction reflect its sensitivity to the architecture, transfer functions, and training dataset size.
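
    A small sketch of that comparison is given below, using a scikit-learn multilayer perceptron trained on a synthetic response surface with different training-set sizes and transfer (activation) functions. The input variables and surface are stand-ins for the wind-tunnel-derived coefficients, not the actual dataset.

```python
# Sketch: neural-network fit quality versus training-set size and transfer
# function, on a synthetic stand-in for an aerodynamic coefficient surface.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_total = 2000
X = rng.uniform(-1, 1, size=(n_total, 4))   # angle of attack, Mach, sideslip, deflection
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2] * X[:, 3]

X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n_train in (50, 200, 800):
    for activation in ("tanh", "relu", "logistic"):
        net = MLPRegressor(hidden_layer_sizes=(20,), activation=activation,
                           max_iter=5000, random_state=0)
        net.fit(X_pool[:n_train], y_pool[:n_train])
        print(f"n_train={n_train:4d}  activation={activation:8s}  "
              f"test R^2 = {net.score(X_test, y_test):.3f}")
```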

  10. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
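
    A minimal sketch of the kind of calculation the abstract describes (attenuation as a function of average rain rate) is given below, using a generic power-law specific attenuation applied over an effective path length. The coefficients, path length, and reduction factor are illustrative placeholders, not the paper's fitted model parameters.

```python
# Sketch: generic power-law rain attenuation, gamma = k * R^alpha (dB/km),
# scaled by an effective slant-path length. Values are illustrative only.
import numpy as np

def path_attenuation_db(rain_rate_mm_h, k, alpha, slant_path_km, reduction=0.7):
    """Specific attenuation times an effective (reduced) path length, in dB."""
    gamma = k * rain_rate_mm_h ** alpha      # specific attenuation, dB/km
    return gamma * slant_path_km * reduction

rain_rates = np.array([5, 10, 25, 50, 100])  # mm/h, e.g. from exceedance statistics
for r in rain_rates:
    a = path_attenuation_db(r, k=0.0188, alpha=1.217, slant_path_km=6.0)
    print(f"R = {r:5.1f} mm/h  ->  predicted attenuation = {a:5.1f} dB")
```

    Combining such a curve with measured or predicted rain-rate exceedance statistics yields the annual attenuation statistics the abstract refers to.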

  11. A Correlational Study of the Relationships between Music Aptitude and Phonemic Awareness of Kindergarten Children

    ERIC Educational Resources Information Center

    Rubinson, Laura E.

    2010-01-01

    More than one third of American children cannot read at a basic level by fourth grade (Lee, Grigg, & Donahue, 2007) and those numbers are even higher for African American, Hispanic and poor White students (Boorman et al., 2007). These are alarming statistics given that the ability to read is the most basic and fundamental skill for academic…

  12. Availability of Instructional Materials at the Basic Education Level in Enugu Educational Zone of Enugu State, Nigeria

    ERIC Educational Resources Information Center

    Chukwu, Leo C.; Eze, Thecla A. Y.; Agada, Fidelia Chinyelugo

    2016-01-01

    The study examined the availability of instructional materials at the basic education level in Enugu Education Zone of Enugu State, Nigeria. One research question and one hypothesis guided the study. The research question was answered using mean and grand mean ratings, while the hypothesis was tested using t-test statistics at 0.05 level of…

  13. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  14. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
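
    A minimal sketch of the overfitting check described is given below: the optimism-adjusted C-index of a logistic SSI model obtained by standard bootstrap resampling. The predictors and event rate are synthetic placeholders, not the Japan Nosocomial Infections Surveillance data.

```python
# Sketch: bootstrap optimism correction of the C-index (ROC AUC) for a
# logistic surgical-site-infection model. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 1500
X = rng.normal(size=(n, 6))                          # candidate risk factors
p = 1 / (1 + np.exp(-(-2.6 + 0.8 * X[:, 0] + 0.5 * X[:, 1])))
y = (rng.random(n) < p).astype(int)                  # SSI yes/no (~7% incidence)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimisms = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)                 # bootstrap resample
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimisms.append(auc_boot - auc_orig)

adjusted = apparent - np.mean(optimisms)
print(f"apparent C-index = {apparent:.3f}, optimism-adjusted = {adjusted:.3f}")
```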

  15. Investigating the management performance of disinfection analysis of water distribution networks using data mining approaches.

    PubMed

    Zounemat-Kermani, Mohammad; Ramezani-Charmahineh, Abdollah; Adamowski, Jan; Kisi, Ozgur

    2018-06-13

    Chlorination, the basic treatment utilized for drinking water sources, is widely used for water disinfection and pathogen elimination in water distribution networks. Therefore, the proper prediction of chlorine consumption is of great importance for water distribution network performance. In this respect, data mining techniques, which have the ability to discover the relationship between dependent variable(s) and independent variables, can be considered as alternative approaches to conventional methods (e.g., numerical methods). This study examines the applicability of three key methods, based on the data mining approach, for predicting chlorine levels in water distribution networks. ANNs (artificial neural networks, including the multi-layer perceptron neural network, MLPNN, and radial basis function neural network, RBFNN), SVM (support vector machine), and CART (classification and regression tree) methods were used to estimate the concentration of residual chlorine in the distribution networks of three villages in Kerman Province, Iran. Produced water (flow), chlorine consumption, and residual chlorine were collected daily for 3 years. An assessment of the studied models using several statistical criteria (NSC, RMSE, R2, and SEP) indicated that, in general, MLPNN has the greatest capability for predicting chlorine levels, followed by CART, SVM, and RBFNN. Weaker performance of the data-driven methods in the water distribution networks could, in some cases, be attributed to improper chlorination management rather than to the methods' capabilities.

  16. General statistics of stochastic process of gene expression in eukaryotic cells.

    PubMed Central

    Kuznetsov, V A; Knott, G D; Bonner, R F

    2002-01-01

    Thousands of genes are expressed at such very low levels (≤1 copy per cell) that global gene expression analysis of rarer transcripts remains problematic. Ambiguity in identification of rarer transcripts creates considerable uncertainty in fundamental questions such as the total number of genes expressed in an organism and the biological significance of rarer transcripts. Knowing the distribution of the true number of genes expressed at each level and the corresponding gene expression level probability function (GELPF) could help resolve these uncertainties. We found that all observed large-scale gene expression data sets in yeast, mouse, and human cells follow a Pareto-like distribution model skewed by many low-abundance transcripts. A novel stochastic model of the gene expression process predicts the universality of the GELPF both across different cell types within a multicellular organism and across different organisms. This model allows us to predict the frequency distribution of all gene expression levels within a single cell and to estimate the number of expressed genes in a single cell and in a population of cells. A random "basal" transcription mechanism for protein-coding genes in all or almost all eukaryotic cell types is predicted. This fundamental mechanism might enhance the expression of rarely expressed genes and, thus, provide a basic level of phenotypic diversity, adaptability, and random monoallelic expression in cell populations. PMID:12136033

  17. Basic statistics with Microsoft Excel: a review.

    PubMed

    Divisi, Duilio; Di Leonardo, Gabriella; Zaccagna, Gino; Crisci, Roberto

    2017-06-01

    The scientific world is enriched daily with new knowledge, due to new technologies and continuous discoveries. The mathematical functions explain the statistical concepts, particularly those of mean, median and mode, along with those of frequency and frequency distribution associated with histograms and graphical representations, determining elaborative processes on the basis of spreadsheet operations. The aim of the study is to highlight the mathematical basis of the statistical models that regulate the operation of spreadsheets in Microsoft Excel.

  18. Basic statistics with Microsoft Excel: a review

    PubMed Central

    Di Leonardo, Gabriella; Zaccagna, Gino; Crisci, Roberto

    2017-01-01

    The scientific world is enriched daily with new knowledge, due to new technologies and continuous discoveries. The mathematical functions explain the statistical concepts, particularly those of mean, median and mode, along with those of frequency and frequency distribution associated with histograms and graphical representations, determining elaborative processes on the basis of spreadsheet operations. The aim of the study is to highlight the mathematical basis of the statistical models that regulate the operation of spreadsheets in Microsoft Excel. PMID:28740690

  19. A. C. C. Fact Book: A Statistical Profile of Allegany Community College and the Community It Serves.

    ERIC Educational Resources Information Center

    Andersen, Roger C.

    This document is intended to be an authoritative compilation of frequently referenced basic facts concerning Allegany Community College (ACC) in Maryland. It is a statistical profile of ACC and the community it serves, divided into six sections: enrollment, students, faculty, community, support services, and general college related information.…

  20. The Structure of Research Methodology Competency in Higher Education and the Role of Teaching Teams and Course Temporal Distance

    ERIC Educational Resources Information Center

    Schweizer, Karl; Steinwascher, Merle; Moosbrugger, Helfried; Reiss, Siegbert

    2011-01-01

    The development of research methodology competency is a major aim of the psychology curriculum at universities. Usually, three courses concentrating on basic statistics, advanced statistics and experimental methods, respectively, serve the achievement of this aim. However, this traditional curriculum-based course structure gives rise to the…

  1. Ten Ways to Improve the Use of Statistical Mediation Analysis in the Practice of Child and Adolescent Treatment Research

    ERIC Educational Resources Information Center

    Maric, Marija; Wiers, Reinout W.; Prins, Pier J. M.

    2012-01-01

    Despite guidelines and repeated calls from the literature, statistical mediation analysis in youth treatment outcome research is rare. Even more concerning is that many studies that "have" reported mediation analyses do not fulfill basic requirements for mediation analysis, providing inconclusive data and clinical implications. As a result, after…

  2. Statistical estimators for monitoring spotted owls in Oregon and Washington in 1987.

    Treesearch

    Tlmothy A. Max; Ray A. Souter; Kathleen A. O' Halloran

    1990-01-01

    Spotted owls (Strix occidentalis) were monitored on 11 National Forests in the Pacific Northwest Region of the USDA Forest Service between March and August of 1987. The basic intent of monitoring was to provide estimates of occupancy and reproduction rates for pairs of spotted owls. This paper documents the technical details of the statistical...

  3. Statistical techniques for sampling and monitoring natural resources

    Treesearch

    Hans T. Schreuder; Richard Ernst; Hugo Ramirez-Maldonado

    2004-01-01

    We present the statistical theory of inventory and monitoring from a probabilistic point of view. We start with the basics and show the interrelationships between designs and estimators illustrating the methods with a small artificial population as well as with a mapped realistic population. For such applications, useful open source software is given in Appendix 4....

  4. Peer-Assisted Learning in Research Methods and Statistics

    ERIC Educational Resources Information Center

    Stone, Anna; Meade, Claire; Watling, Rosamond

    2012-01-01

    Feedback from students on a Level 1 Research Methods and Statistics module, studied as a core part of a BSc Psychology programme, highlighted demand for additional tutorials to help them to understand basic concepts. Students in their final year of study commonly request work experience to enhance their employability. All students on the Level 1…

  5. Adult Basic and Secondary Education Program Statistics. Fiscal Year 1976.

    ERIC Educational Resources Information Center

    Cain, Sylvester H.; Whalen, Barbara A.

    Reports submitted to the National Center for Education Statistics provided data for this compilation and tabulation of data on adult participants in U.S. educational programs in fiscal year 1976. In the summary section introducing the charts, it is noted that adult education programs funded under P.L. 91-230 served over 1.6 million persons--an…

  6. The Education Almanac, 1987-1988. Facts and Figures about Our Nation's System of Education. Third Edition.

    ERIC Educational Resources Information Center

    Goodman, Leroy V., Ed.

    This is the third edition of the Education Almanac, an assemblage of statistics, facts, commentary, and basic background information about the conduct of schools in the United States. Features of this variegated volume include an introductory section on "Education's Newsiest Developments," followed by some vital educational statistics, a set of…

  7. Theory of Financial Risk and Derivative Pricing

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2009-01-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  8. Theory of Financial Risk and Derivative Pricing - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2003-12-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  9. Utility of the Care Dependency Scale in predicting care needs and health risks of elderly patients admitted to a geriatric unit: a cross-sectional study of 200 consecutive patients.

    PubMed

    Doroszkiewicz, Halina; Sierakowska, Matylda; Muszalik, Marta

    2018-01-01

    The aim of the study was to evaluate the usefulness of the Polish version of the Care Dependency Scale (CDS) in predicting care needs and health risks of elderly patients admitted to a geriatric unit. This was a cross-sectional study of 200 geriatric patients aged ≥60 years, consecutively admitted to a geriatrics unit in Poland. The study was carried out using the Polish version of the CDS questionnaire to evaluate biopsychosocial needs and the level of care dependency. The mean age of the participating geriatric patients was 81.8±6.6 years. The mean sum of the CDS index for all the participants was 55.3±15.1. Detailed analysis of the evaluation of the respondents' functional condition showed statistically significant differences in the levels of care dependency. Evaluation of the patients' physical performance in terms of the ability to perform basic activities of daily living (ADL) and instrumental ADL (I-ADL) showed statistically significant differences between the levels of care dependency. Patients with high dependency were more often prone to pressure ulcers - 13.1±3.3, falls (87.2%), poorer emotional state - 6.9±3.6, mental function - 5.1±2.8, and more often had problems with locomotion, vision, and hearing. The results showed that locomotive disability, depression, advanced age, and problems with vision and hearing are associated with increasing care dependency. CDS evaluation of each admitted geriatric patient enables us to predict the care needs and health risks that need to be reduced and the disease states to be improved. CDS evaluation should be accompanied by the use of other instruments and assessments to evaluate pressure ulcer risk and fall risk, and by actions toward the improvement of subjective well-being, correction of vision and hearing problems where possible, and assistive devices for locomotion.

  10. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    PubMed

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability ( r  = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.
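
    The reported test-retest reliability (r = 0.67) is a Pearson correlation between the learning indices obtained in the two sessions. A minimal sketch with simulated data (the values are invented, not the study's):

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical learning indices for 42 participants tested in two sessions.
    rng = np.random.default_rng(0)
    session1 = rng.normal(0.0, 1.0, size=42)
    session2 = 0.7 * session1 + rng.normal(0.0, 0.7, size=42)  # correlated retest scores

    r, p = pearsonr(session1, session2)
    print(f"test-retest reliability r = {r:.2f} (p = {p:.3f})")
    ```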

  11. Online incidental statistical learning of audiovisual word sequences in adults: a registered report

    PubMed Central

    Duta, Mihaela; Thompson, Paul

    2018-01-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory–picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test–retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process. PMID:29515876

  12. Do different types of school mathematics development depend on different constellations of numerical versus general cognitive abilities?

    PubMed

    Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher

    2010-11-01

    The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.

  13. Do Different Types of School Mathematics Development Depend on Different Constellations of Numerical versus General Cognitive Abilities?

    PubMed Central

    Fuchs, Lynn S.; Geary, David C.; Compton, Donald L.; Fuchs, Douglas; Hamlett, Carol L.; Seethaler, Pamela M.; Bryant, Joan D.; Schatschneider, Christopher

    2010-01-01

    The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (n=280; 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations (PCs), and word problems (WPs) in fall and then reassessed on PCs and WPs in spring. Development was indexed via latent change scores, and the interplay between numerical and domain-general abilities was analyzed via multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of PC and WP development. Yet, for PC development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for WP development, the set of domain- general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive. PMID:20822213

  14. Basic Weather Facts Study Texts for Students.

    ERIC Educational Resources Information Center

    Ontario Ministry of the Environment, Toronto.

    This pamphlet offers information to teachers and students concerning basic facts about weather and how to construct simple weather measurement devices. Directions, necessary materials, procedures, and instructions for use are given for four weather predicting instruments: wind vane, rain gauge, barometer, anemometer. Information is provided on…

  15. How Online Basic Psychological Need Satisfaction Influences Self-Disclosure Online among Chinese Adolescents: Moderated Mediation Effect of Exhibitionism and Narcissism.

    PubMed

    Liu, Ying; Liu, Ru-De; Ding, Yi; Wang, Jia; Zhen, Rui; Xu, Le

    2016-01-01

    Under the basic framework of self-determination theory, the present study examined a moderated mediation model in which exhibitionism mediated the relationship between online basic psychological need satisfaction and self-disclosure on the mobile Internet, and this mediation effect was moderated by narcissism. A total of 296 Chinese middle school students participated in this research. The results revealed that exhibitionism fully mediated the association between online competence need satisfaction and self-disclosure on the mobile net, and partly mediated the association between online relatedness need satisfaction and self-disclosure on the mobile net. The mediating path from online basic psychological need satisfaction (competence and relatedness) to exhibitionism was moderated by narcissism. Compared to the low level of narcissism, online competence need satisfaction had a stronger predictive power on exhibitionism under the high level of narcissism condition. In contrast, online relatedness need satisfaction had a weaker predictive power on exhibitionism.

  16. How Online Basic Psychological Need Satisfaction Influences Self-Disclosure Online among Chinese Adolescents: Moderated Mediation Effect of Exhibitionism and Narcissism

    PubMed Central

    Liu, Ying; Liu, Ru-De; Ding, Yi; Wang, Jia; Zhen, Rui; Xu, Le

    2016-01-01

    Under the basic framework of self-determination theory, the present study examined a moderated mediation model in which exhibitionism mediated the relationship between online basic psychological need satisfaction and self-disclosure on the mobile Internet, and this mediation effect was moderated by narcissism. A total of 296 Chinese middle school students participated in this research. The results revealed that exhibitionism fully mediated the association between online competence need satisfaction and self-disclosure on the mobile net, and partly mediated the association between online relatedness need satisfaction and self-disclosure on the mobile net. The mediating path from online basic psychological need satisfaction (competence and relatedness) to exhibitionism was moderated by narcissism. Compared to the low level of narcissism, online competence need satisfaction had a stronger predictive power on exhibitionism under the high level of narcissism condition. In contrast, online relatedness need satisfaction had a weaker predictive power on exhibitionism. PMID:27616999

  17. Simulation of laser beam reflection at the sea surface

    NASA Astrophysics Data System (ADS)

    Schwenger, Frédéric; Repasi, Endre

    2011-05-01

    A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable both for calculating images for a SWIR (short-wave infrared) imaging sensor and for determining the total detected power of reflected laser light for a bistatic configuration of laser source and receiver under different atmospheric conditions. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of laser light reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. The propagation model for water waves is applied for sea surface animation. To predict the view of a camera in the SWIR spectral band, the sea surface radiance must be calculated. This is done by considering the emitted sea surface radiance and the reflected sky radiance, calculated by MODTRAN. Additionally, the radiances of laser light specularly reflected at the wind-roughened sea surface are modeled in the SWIR band using an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). This BRDF model considers the slope statistics of the waves and accounts for slope-shadowing of waves, which occurs especially at flat incident angles of the laser beam and near-horizontal detection angles of reflected irradiance at rough seas. Simulation results are presented showing the variation of the detected laser power as a function of the geometric configuration of laser and sensor and of the wind characteristics.

  18. Basic concepts and techniques of dental implants.

    PubMed

    Tagliareni, Jonathan M; Clarkson, Earl

    2015-04-01

    Dental implants provide completely edentulous and partial edentulous patients the function and esthetics they had with natural dentition. It is critical to understand and apply predictable surgical principles when treatment planning and surgically restoring edentulous spaces with implants. This article defines basic implant concepts that should be meticulously followed for predictable results when treating patients and restoring dental implants. Topics include biological and functional considerations, biomechanical considerations, preoperative assessments, medical history and risk assessments, oral examinations, radiographic examinations, contraindications, and general treatment planning options. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Facts about Folic Acid

    MedlinePlus


  20. A new grading system focusing on neurological outcomes for brain metastases treated with stereotactic radiosurgery: the modified Basic Score for Brain Metastases.

    PubMed

    Serizawa, Toru; Higuchi, Yoshinori; Nagano, Osamu; Matsuda, Shinji; Ono, Junichi; Saeki, Naokatsu; Hirai, Tatsuo; Miyakawa, Akifumi; Shibamoto, Yuta

    2014-12-01

    The Basic Score for Brain Metastases (BSBM) proposed by Lorenzoni and colleagues is one of the best grading systems for predicting survival periods after stereotactic radiosurgery (SRS) for brain metastases. However, it includes no brain factors and cannot predict neurological outcomes, such as preservation of neurological function and prevention of neurological death. Herein, the authors propose a modified BSBM, adding 4 brain factors to the original BSBM, enabling prediction of neurological outcomes, as well as of overall survival, in patients undergoing SRS. To serve as neurological prognostic scores (NPSs), the authors scored 4 significant brain factors for both preservation of neurological function (qualitative survival) and prevention of neurological death (neurological survival) as 0 or 1 as described in the following: > 10 brain tumors = 0 or ≤ 10 = 1, total tumor volume > 15 cm(3) = 0 or ≤ 15 cm(3) = 1, MRI findings of localized meningeal dissemination (yes = 0 or no = 1), and neurological symptoms (yes = 0 or no = 1). According to the sum of NPSs, patients were classified into 2 subgroups: Subgroup A with a total NPS of 3 or 4 and Subgroup B with an NPS of 0, 1, or 2. The authors defined the modified BSBM according to the NPS subgroup classification applied to the original BSBM groups. The validity of this modified BSBM in 2838 consecutive patients with brain metastases treated with SRS was verified. Patients included 1868 with cancer of the lung (including 1604 with non-small cell lung cancer), 355 of the gastrointestinal tract, 305 of the breast, 176 of the urogenital tract, and 134 with other cancers. Subgroup A had 2089 patients and Subgroup B 749. Median overall survival times were 2.6 months in BSBM 0 (382 patients), 5.7 in BSBM 1 (1143), 11.4 in BSBM 2 (1011) and 21.7 in BSBM 3 (302), and pairwise differences between the BSBM groups were statistically significant (all p < 0.0001). One-year qualitative survival rates were 64.6% (modified BSBM 0A, 204 patients), 45.0% (0B, 178), 82.5% (1A, 825), 63.3% (1B, 318), 86.4% (2A, 792), 73.7% (2B, 219), 91.4% (3A, 268), and 73.5% (3B, 34). One-year neurological survival rates were 82.6% (0A), 52.4% (0B), 90.5% (1A), 78.1% (1B), 91.1% (2A), 83.2% (2B), 93.9% (3A), and 76.3% (3B), where A and B identify the subgroup. Statistically significant differences in both qualitative and neurological survivals between Subgroups A and B were detected in all BSBM groups. The authors' new index, the modified BSBM, was found to be excellent for predicting neurological outcomes, independently of life expectancy, in SRS-treated patients with brain metastases.
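
    The neurological prognostic score described above is simple enough to state directly in code; the sketch below encodes the four brain factors and the subgroup split as given in the abstract (the function names and input format are ours, not the authors').

    ```python
    def neurological_prognostic_score(n_tumors, total_volume_cm3,
                                      meningeal_dissemination, neurological_symptoms):
        """Sum of the four brain factors, each scored 0 or 1 as in the abstract."""
        score = 0
        score += 1 if n_tumors <= 10 else 0           # >10 brain tumors = 0, <=10 = 1
        score += 1 if total_volume_cm3 <= 15 else 0   # >15 cm3 total volume = 0, <=15 cm3 = 1
        score += 0 if meningeal_dissemination else 1  # MRI meningeal dissemination: yes = 0, no = 1
        score += 0 if neurological_symptoms else 1    # neurological symptoms: yes = 0, no = 1
        return score

    def nps_subgroup(nps):
        """Subgroup A for a total NPS of 3 or 4, Subgroup B for 0, 1 or 2."""
        return "A" if nps >= 3 else "B"

    # Example: 4 tumors, 8 cm3 total volume, no dissemination, symptomatic patient.
    nps = neurological_prognostic_score(4, 8.0, False, True)
    print(nps, nps_subgroup(nps))  # -> 3 A
    ```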

  1. The proposed 'concordance-statistic for benefit' provided a useful metric when modeling heterogeneous treatment effects.

    PubMed

    van Klaveren, David; Steyerberg, Ewout W; Serruys, Patrick W; Kent, David M

    2018-02-01

    Clinical prediction models that support treatment decisions are usually evaluated for their ability to predict the risk of an outcome rather than treatment benefit-the difference between outcome risk with vs. without therapy. We aimed to define performance metrics for a model's ability to predict treatment benefit. We analyzed data of the Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) trial and of three recombinant tissue plasminogen activator trials. We assessed alternative prediction models with a conventional risk concordance-statistic (c-statistic) and a novel c-statistic for benefit. We defined observed treatment benefit by the outcomes in pairs of patients matched on predicted benefit but discordant for treatment assignment. The 'c-for-benefit' represents the probability that from two randomly chosen matched patient pairs with unequal observed benefit, the pair with greater observed benefit also has a higher predicted benefit. Compared to a model without treatment interactions, the SYNTAX score II had improved ability to discriminate treatment benefit (c-for-benefit 0.590 vs. 0.552), despite having similar risk discrimination (c-statistic 0.725 vs. 0.719). However, for the simplified stroke-thrombolytic predictive instrument (TPI) vs. the original stroke-TPI, the c-for-benefit (0.584 vs. 0.578) was similar. The proposed methodology has the potential to measure a model's ability to predict treatment benefit not captured with conventional performance metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
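
    A simplified reading of the procedure: patients are matched across arms on predicted benefit, observed benefit within a matched pair is the difference in outcomes, and concordance is computed over pairs of matched pairs. The sketch below follows that reading with rank-order matching and is not the authors' implementation.

    ```python
    import numpy as np

    def c_for_benefit(pred_benefit, outcome, treated):
        """Simplified c-for-benefit: match treated and control patients by rank of
        predicted benefit, then compute concordance between predicted and observed
        benefit over all pairs of matched pairs with unequal observed benefit."""
        pred_benefit = np.asarray(pred_benefit, dtype=float)
        outcome = np.asarray(outcome, dtype=int)        # 1 = adverse event, 0 = no event
        treated = np.asarray(treated, dtype=bool)

        # Rank-order matching on predicted benefit within each arm.
        t_order = np.where(treated)[0][np.argsort(pred_benefit[treated])]
        c_order = np.where(~treated)[0][np.argsort(pred_benefit[~treated])]
        n_pairs = min(len(t_order), len(c_order))
        t_order, c_order = t_order[:n_pairs], c_order[:n_pairs]

        # Pair-level predicted and observed benefit (events prevented by treatment).
        pair_pred = (pred_benefit[t_order] + pred_benefit[c_order]) / 2
        pair_obs = outcome[c_order] - outcome[t_order]  # -1, 0 or +1

        concordant, informative = 0.0, 0
        for i in range(n_pairs):
            for j in range(i + 1, n_pairs):
                if pair_obs[i] == pair_obs[j]:
                    continue                            # uninformative pair of pairs
                informative += 1
                hi, lo = (i, j) if pair_obs[i] > pair_obs[j] else (j, i)
                if pair_pred[hi] > pair_pred[lo]:
                    concordant += 1
                elif pair_pred[hi] == pair_pred[lo]:
                    concordant += 0.5
        return concordant / informative if informative else float("nan")

    # Toy demo with random data (illustrative only).
    rng = np.random.default_rng(1)
    n = 200
    pred = rng.normal(0.10, 0.05, n)                    # predicted absolute risk reduction
    trt = rng.random(n) < 0.5
    out = (rng.random(n) < (0.30 - pred * trt)).astype(int)
    print(round(c_for_benefit(pred, out, trt), 3))
    ```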

  2. Learning predictive statistics from temporal sequences: Dynamics and strategies.

    PubMed

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-10-01

    Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics-that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
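
    The sequence family described above can be mimicked with a first-order Markov chain whose transition matrix is either context-free (all rows identical, so only symbol frequencies matter) or context-dependent (rows differ, so the next symbol is contingent on the previous one). A minimal generator with made-up probabilities, not the study's stimuli:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    symbols = ["A", "B", "C", "D"]

    # Frequency statistics: the next symbol is independent of context;
    # some symbols are simply more probable than others.
    freq_matrix = np.tile([0.4, 0.3, 0.2, 0.1], (4, 1))

    # Context-based statistics: next-symbol probability depends on the preceding symbol.
    context_matrix = np.array([
        [0.1, 0.7, 0.1, 0.1],
        [0.1, 0.1, 0.7, 0.1],
        [0.1, 0.1, 0.1, 0.7],
        [0.7, 0.1, 0.1, 0.1],
    ])

    def generate(transition, length=20, start=0):
        """Sample a symbol sequence from a first-order Markov chain."""
        state, seq = start, []
        for _ in range(length):
            state = rng.choice(len(symbols), p=transition[state])
            seq.append(symbols[state])
        return "".join(seq)

    print("frequency statistics:", generate(freq_matrix))
    print("context statistics  :", generate(context_matrix))
    ```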

  3. SEDIDAT: A BASIC program for the collection and statistical analysis of particle settling velocity data

    NASA Astrophysics Data System (ADS)

    Wright, Robyn; Thornberg, Steven M.

    SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is understood easily by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi, and Phi data.

  4. [Comment on] Statistical discrimination

    NASA Astrophysics Data System (ADS)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.

  5. Statistical validation of predictive TRANSP simulations of baseline discharges in preparation for extrapolation to JET D-T

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET

    2017-06-01

    This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good matching of predicted T i with experimental measurements allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations are also investigated in this paper. The statistical validation and the assessment of uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.

  6. Proceedings of the NASTRAN (Tradename) Users’ Colloquium (15th) Held in Kansas City, Missouri on 4-8 May 1987

    DTIC Science & Technology

    1987-08-01

    HVAC duct hanger system over an extensive frequency range. The finite element, component mode synthesis, and statistical energy analysis methods are...800-5,000 Hz) analysis was conducted with Statistical Energy Analysis (SEA) coupled with a closed-form harmonic beam analysis program. These...resonances may be obtained by using a finer frequency increment. Statistical Energy Analysis The basic assumption used in SEA analysis is that within each band

  7. Statistical analyses on sandstones: Systematic approach for predicting petrographical and petrophysical properties

    NASA Astrophysics Data System (ADS)

    Stück, H. L.; Siegesmund, S.

    2012-04-01

    Sandstones are a popular natural stone due to their wide occurrence and availability. The different applications for these stones have led to an increase in demand. From the viewpoint of conservation and the natural stone industry, an understanding of the material behaviour of this construction material is very important. Sandstones are a highly heterogeneous material. Based on statistical analyses with a sufficiently large dataset, a systematic approach to predicting the material behaviour should be possible. Since the literature already contains a large volume of data concerning the petrographical and petrophysical properties of sandstones, a large dataset could be compiled for the statistical analyses. The aim of this study is to develop constraints on the material behaviour and especially on the weathering behaviour of sandstones. Approximately 300 samples from historical and presently mined natural sandstones in Germany, and ones described worldwide, were included in the statistical approach. The mineralogical composition and fabric characteristics were determined from detailed thin section analyses and descriptions in the literature. Particular attention was paid to evaluating the compositional and textural maturity, grain contact and contact thickness, type of cement, degree of alteration, and the intergranular volume. Statistical methods were used to test for normal distributions and to calculate linear regressions of the basic petrophysical properties of density, porosity, and water uptake as well as strength. The sandstones were classified into three different pore size distributions and evaluated with the other petrophysical properties. Weathering behaviour, such as hygric swelling and salt loading tests, was also included. To identify similarities between individual sandstones or to define groups of specific sandstone types, principal component analysis, cluster analysis, and factor analysis were applied. Our results show that composition and porosity evolution during diagenesis are very important controls on the petrophysical properties of a building stone. The relationship between intergranular volume, cementation, and grain contact can also provide valuable information for predicting strength properties. Since the samples investigated mainly originate from the Triassic German epicontinental basin, arkoses and feldspar-arenites are underrepresented. In general, the sandstones can be grouped as follows: i) quartzites, highly mature, with a primary porosity of about 40%; ii) quartzites, highly mature, showing a primary porosity of 40% but with early clay infiltration; iii) sublitharenites-lithic arenites exhibiting a lower primary porosity and higher cementation with quartz and ferritic Fe-oxides; and iv) sublitharenites-lithic arenites with a higher content of pseudomatrix. However, in the last two groups the feldspar and lithoclasts can also show considerable alteration. All sandstone groups differ with respect to the pore space and strength data, as well as water uptake properties, which were obtained by linear regression analysis. Similar petrophysical properties are discernible for each type when using principal component analysis. Furthermore, the strength as well as the porosity of the sandstones show distinct differences with respect to their stratigraphic ages and compositions. The relationship between porosity, strength, and salt resistance could also be verified.
Hygric swelling shows an interrelation to pore size type, porosity and strength but also to the degree of alteration (e.g. lithoclasts, pseudomatrix). To summarize, the different regression analyses and the calculated confidence regions provide a significant tool to classify the petrographical and petrophysical parameters of sandstones. Based on this, the durability and the weathering behavior of the sandstone groups can be constrained. Keywords: sandstones, petrographical & petrophysical properties, predictive approach, statistical investigation

  8. Columbia/Willamette Skill Builders Consortium. Final Performance Report. Appendix 5B Anodizing Inc. (Aluminum Extrusion Manufacturing). Basic Measurement Math. Instructors' Reports and Sample Curriculum Materials.

    ERIC Educational Resources Information Center

    Taylor, Marjorie; And Others

    Anodizing, Inc., Teamsters Local 162, and Mt. Hood Community College (Oregon) developed a workplace literacy program for workers at Anodizing. These workers did not have the basic skill competencies to benefit from company training efforts in statistical process control and quality assurance and were not able to advance to lead and supervisory…

  9. Opportunities Unlimited: Minnesota Indians Adult Basic Education; Narrative and Statistical Evaluation Third Year 1971-72, with a Review of the First and Second Years.

    ERIC Educational Resources Information Center

    Vizenor, Gerald

    Opportunities Unlimited is a State-wide program to provide adult basic education (ABE) and training for Indians on Minnesota reservations and in Indian communities. An administrative center in Bemidji serves communities on the Red Lake, White Earth, and Leech Lake Reservations, and a Duluth center provides ABE and training for communities on the…

  10. A quantitative comparison of corrective and perfective maintenance

    NASA Technical Reports Server (NTRS)

    Henry, Joel; Cain, James

    1994-01-01

    This paper presents a quantitative comparison of corrective and perfective software maintenance activities. The comparison utilizes basic data collected throughout the maintenance process. The data collected are extensive and allow the impact of both types of maintenance to be quantitatively evaluated and compared. Basic statistical techniques test relationships between and among process and product data. The results show interesting similarities and important differences in both process and product characteristics.

  11. The Relationships between the Iowa Test of Basic Skills and the Washington Assessment of Student Learning in the State of Washington. Technical Report.

    ERIC Educational Resources Information Center

    Joireman, Jeff; Abbott, Martin L.

    This report examines the overlap between student test results on the Iowa Test of Basic Skills (ITBS) and the Washington Assessment of Student Learning (WASL). The two tests were compared and contrasted in terms of content and measurement philosophy, and analyses studied the statistical relationship between the ITBS and the WASL. The ITBS assesses…

  12. Fundamentals in Biostatistics for Research in Pediatric Dentistry: Part I - Basic Concepts.

    PubMed

    Garrocho-Rangel, J A; Ruiz-Rodríguez, M S; Pozos-Guillén, A J

    The purpose of this report was to provide the reader with some basic concepts in order to better understand the significance and reliability of the results of any article on Pediatric Dentistry. Currently, Pediatric Dentists need the best evidence available in the literature on which to base their diagnoses and treatment decisions for the children's oral care. Basic understanding of Biostatistics plays an important role during the entire Evidence-Based Dentistry (EBD) process. This report describes Biostatistics fundamentals in order to introduce the basic concepts used in statistics, such as summary measures, estimation, hypothesis testing, effect size, level of significance, p value, confidence intervals, etc., which are available to Pediatric Dentists interested in reading or designing original clinical or epidemiological studies.
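
    For readers who want the listed quantities in concrete form, the short sketch below reports a two-sample comparison as a p value, a 95% confidence interval for the mean difference, and Cohen's d as the effect size; the data are simulated purely for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical caries scores in two groups of children (illustrative only).
    group_a = rng.normal(3.0, 1.2, 40)
    group_b = rng.normal(2.4, 1.2, 40)

    # Hypothesis test: Welch's two-sample t-test.
    t, p = stats.ttest_ind(group_a, group_b, equal_var=False)

    # 95% confidence interval for the difference in means.
    diff = group_a.mean() - group_b.mean()
    se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
    df = len(group_a) + len(group_b) - 2          # simple approximation of the degrees of freedom
    ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

    # Effect size: Cohen's d with a pooled standard deviation.
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    d = diff / pooled_sd

    print(f"t = {t:.2f}, p = {p:.4f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], Cohen's d = {d:.2f}")
    ```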

  13. Validation of CRASH Model in Prediction of 14-day Mortality and 6-month Unfavorable Outcome of Head Trauma Patients

    PubMed Central

    Hashemi, Behrooz; Amanat, Mahnaz; Baratloo, Alireza; Forouzanfar, Mohammad Mehdi; Rahmati, Farhad; Motamedi, Maryam; Safari, Saeed

    2016-01-01

    Introduction: To date, many prognostic models have been proposed to predict the outcome of patients with traumatic brain injuries. External validation of these models in different populations is of great importance for their generalization. The present study was designed, aiming to determine the value of CRASH prognostic model in prediction of 14-day mortality (14-DM) and 6-month unfavorable outcome (6-MUO) of patients with traumatic brain injury. Methods: In the present prospective diagnostic test study, calibration and discrimination of CRASH model were evaluated in head trauma patients referred to the emergency department. Variables required for calculating CRASH expected risks (ER), and observed 14-DM and 6-MUO were gathered. Then ER of 14-DM and 6-MUO were calculated. The patients were followed for 6 months and their 14-DM and 6-MUO were recorded. Finally, the correlation of CRASH ER and the observed outcome of the patients was evaluated. The data were analyzed using STATA version 11.0. Results: In this study, 323 patients with the mean age of 34.0 ± 19.4 years were evaluated (87.3% male). Calibration of the basic and CT models in prediction of 14-day and 6-month outcome were in the desirable range (P < 0.05). Area under the curve in the basic model for prediction of 14-DM and 6-MUO were 0.92 (95% CI: 0.89-0.96) and 0.92 (95% CI: 0.90-0.95), respectively. In addition, area under the curve in the CT model for prediction of 14-DM and 6-MUO were 0.93 (95% CI: 0.91-0.97) and 0.93 (95% CI: 0.91-0.96), respectively. There was no significant difference between the discriminations of the two models in prediction of 14-DM (p = 0.11) and 6-MUO (p = 0.1). Conclusion: The results of the present study showed that CRASH prediction model has proper discrimination and calibration in predicting 14-DM and 6-MUO of head trauma patients. Since there was no difference between the values of the basic and CT models, using the basic model is recommended to simplify the risk calculations. PMID:27800540

  14. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 µmol mol-1), photosynthetic photon flux (PPF) (600 to 1100 µmol m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (≥6 days into the future).
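
    A minimal sketch of the two model classes compared above: a third-order polynomial regression and a small fully connected network, each mapping CO2, PPF, and canopy age to canopy Pn. The synthetic response surface and all hyperparameters are our assumptions; the original models were fit to measured crop-stand data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    co2 = rng.uniform(600, 1500, n)   # µmol mol-1
    ppf = rng.uniform(600, 1100, n)   # µmol m-2 s-1
    age = rng.uniform(10, 20, n)      # days after planting
    X = np.column_stack([co2, ppf, age])

    # Invented response surface standing in for measured canopy Pn.
    pn = 0.01 * ppf + 0.005 * co2 - 0.02 * (age - 15) ** 2 + rng.normal(0, 0.5, n)

    poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(X, pn)
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                     random_state=0)).fit(X, pn)

    setpoint = np.array([[1000.0, 800.0, 14.0]])   # CO2, PPF, canopy age
    print("polynomial prediction:", poly.predict(setpoint)[0])
    print("network prediction   :", net.predict(setpoint)[0])
    ```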

  15. Computer programs for computing particle-size statistics of fluvial sediments

    USGS Publications Warehouse

    Stevens, H.H.; Hubbell, D.W.

    1986-01-01

    Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 language versions are for use on the Prime computer, and the BASIC language versions are for use on microcomputers. The size-statistics programs compute Inman, Trask, and Folk statistical parameters from phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The programs also determine the percentage gravel, sand, silt, and clay, and the Meyer-Peter effective diameter. Documentation and listings for both versions of the programs are included. (Author's abstract)
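
    As a sketch of the kind of computation such a size-statistics program performs, the Folk and Ward graphic parameters can be obtained directly from phi values at specified percent-finer points; the percentile values below are invented for the example.

    ```python
    def folk_ward(phi_percentiles):
        """Folk and Ward (1957) graphic statistics from phi values at the
        5, 16, 25, 50, 75, 84 and 95 percent-finer points."""
        p = dict(zip((5, 16, 25, 50, 75, 84, 95), phi_percentiles))
        mean = (p[16] + p[50] + p[84]) / 3
        sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
        skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                    + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
        kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
        return mean, sorting, skewness, kurtosis

    # Hypothetical phi percentiles for a fine sand sample (illustrative only).
    mean, sorting, skew, kurt = folk_ward([1.4, 1.8, 2.0, 2.4, 2.8, 3.0, 3.4])
    print(f"graphic mean {mean:.2f} phi, sorting {sorting:.2f}, "
          f"skewness {skew:.2f}, kurtosis {kurt:.2f}")
    ```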

  16. Statistical inference of the generation probability of T-cell receptors from sequence repertoires.

    PubMed

    Murugan, Anand; Mora, Thierry; Walczak, Aleksandra M; Callan, Curtis G

    2012-10-02

    Stochastic rearrangement of germline V-, D-, and J-genes to create variable coding sequence for certain cell surface receptors is at the origin of immune system diversity. This process, known as "VDJ recombination", is implemented via a series of stochastic molecular events involving gene choices and random nucleotide insertions between, and deletions from, genes. We use large sequence repertoires of the variable CDR3 region of human CD4+ T-cell receptor beta chains to infer the statistical properties of these basic biochemical events. Because any given CDR3 sequence can be produced in multiple ways, the probability distribution of hidden recombination events cannot be inferred directly from the observed sequences; we therefore develop a maximum likelihood inference method to achieve this end. To separate the properties of the molecular rearrangement mechanism from the effects of selection, we focus on nonproductive CDR3 sequences in T-cell DNA. We infer the joint distribution of the various generative events that occur when a new T-cell receptor gene is created. We find a rich picture of correlation (and absence thereof), providing insight into the molecular mechanisms involved. The generative event statistics are consistent between individuals, suggesting a universal biochemical process. Our probabilistic model predicts the generation probability of any specific CDR3 sequence by the primitive recombination process, allowing us to quantify the potential diversity of the T-cell repertoire and to understand why some sequences are shared between individuals. We argue that the use of formal statistical inference methods, of the kind presented in this paper, will be essential for quantitative understanding of the generation and evolution of diversity in the adaptive immune system.

  17. A Streamflow Statistics (StreamStats) Web Application for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.; Puskas, Barry M.

    2006-01-01

    A StreamStats Web application was developed for Ohio that implements equations for estimating a variety of streamflow statistics including the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year peak streamflows, mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and 25th-, 50th-, and 75th-percentile streamflows. StreamStats is a Web-based geographic information system application designed to facilitate the estimation of streamflow statistics at ungaged locations on streams. StreamStats can also serve precomputed streamflow statistics determined from streamflow-gaging station data. The basic structure, use, and limitations of StreamStats are described in this report. To facilitate the level of automation required for Ohio's StreamStats application, the technique used by Koltun (2003) for computing main-channel slope was replaced with a new computationally robust technique. The new channel-slope characteristic, referred to as SL10-85, differed from the National Hydrography Data-based channel slope values (SL) reported by Koltun (2003) by an average of -28.3 percent, with the median change being -13.2 percent. In spite of the differences, the two slope measures are strongly correlated. The change in channel slope values resulting from the change in computational method necessitated revision of the full-model equations for flood-peak discharges originally presented by Koltun (2003). Average standard errors of prediction for the revised full-model equations presented in this report increased by a small amount over those reported by Koltun (2003), with increases ranging from 0.7 to 0.9 percent. Mean percentage changes in the revised regression and weighted flood-frequency estimates relative to the regression and weighted estimates reported by Koltun (2003) were small, ranging from -0.72 to -0.25 percent and -0.22 to 0.07 percent, respectively.

  18. Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of both Worlds

    NASA Technical Reports Server (NTRS)

    Schmidt, Melisa; Yan, Jerry C.

    1997-01-01

    Many performance monitoring tools are currently available to the super-computing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated halfway through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size vs. data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance, and formulae to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are indeed small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.

  19. Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of Both Worlds

    NASA Technical Reports Server (NTRS)

    Schmidt, Melisa; Yan, Jerry C.; Bailey, David (Technical Monitor)

    1996-01-01

    Many performance monitoring tools are currently available to the super-computing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated halfway through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size vs. data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance and "formulae" to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are indeed small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open source and free platform to facilitate radiomics research — the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities such as CT, PET, MR, SPECT, and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and readability of code for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of the toolbox, for example Java-based DCM4CHE for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open-source software under a GNU license. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC; the analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
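
    As a rough, dependency-light illustration of the features listed under (3), the sketch below computes a few first-order statistics and a gray-level co-occurrence matrix with one derived texture measure on a toy image; it is not CERR code and it omits the zone-size and shape features.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.integers(0, 8, size=(64, 64))   # toy image quantized to 8 gray levels

    # First-order statistics over the region of interest (here, the whole image).
    probs = np.bincount(image.ravel(), minlength=8) / image.size
    first_order = {
        "mean": image.mean(),
        "std": image.std(),
        "skewness": ((image - image.mean()) ** 3).mean() / image.std() ** 3,
        "entropy": -sum(p * np.log2(p) for p in probs if p > 0),
    }

    # Gray-level co-occurrence matrix for a one-pixel offset to the right.
    glcm = np.zeros((8, 8))
    for a, b in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()

    # One classic co-occurrence texture feature: contrast.
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()

    print(first_order)
    print("GLCM contrast:", round(float(contrast), 3))
    ```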

  1. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
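
    One simple way to read "statistical post-model calibration" and model combination is a regression of observed yields on the model predictions, alone and together; the sketch below illustrates that idea on synthetic data and is not the authors' procedure.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 300
    true_yield = rng.normal(10.0, 2.0, n)                  # synthetic yields, t/ha

    # Imperfect predictions standing in for the process-based and statistical models.
    process_pred = true_yield + rng.normal(0.5, 1.5, n)    # biased but informative
    statistical_pred = 0.8 * true_yield + rng.normal(0.0, 1.2, n)

    candidates = {
        "process (calibrated)": process_pred.reshape(-1, 1),
        "statistical (calibrated)": statistical_pred.reshape(-1, 1),
        "combined": np.column_stack([process_pred, statistical_pred]),
    }

    # Post-model calibration / combination: regress observations on the predictions.
    for name, X in candidates.items():
        fitted = LinearRegression().fit(X, true_yield)
        rmse = np.sqrt(np.mean((fitted.predict(X) - true_yield) ** 2))
        print(f"{name:<26s} RMSE = {rmse:.2f}")
    ```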

  2. Statistical/Documentary Report, 1974 and 1975 Assessments of 17-Year-Old Students, Summary Volume; Functional Literacy Basic Reading Performance.

    ERIC Educational Resources Information Center

    Gadway, Charles J.; Wilson, H.A.

    This document provides statistical data on the 1974 and 1975 Mini-Assessment of Functional Literacy, which was designed to determine the extent of functional literacy among seventeen year olds in America. Also presented are data from comparable test items from the 1971 assessment. Three standards are presented, to allow different methods of…

  3. Effects of an Instructional Gaming Characteristic on Learning Effectiveness, Efficiency, and Engagement: Using a Storyline for Teaching Basic Statistical Skills

    ERIC Educational Resources Information Center

    Novak, Elena; Johnson, Tristan E.; Tenenbaum, Gershon; Shute, Valerie J.

    2016-01-01

    The study explored instructional benefits of a storyline gaming characteristic (GC) on learning effectiveness, efficiency, and engagement with the use of an online instructional simulation for graduate students in an introductory statistics course. A storyline is a game-design element that connects scenes with the educational content. In order to…

  4. Examining Agreement and Longitudinal Stability among Traditional and RTI-Based Definitions of Reading Disability Using the Affected-Status Agreement Statistic

    ERIC Educational Resources Information Center

    Waesche, Jessica S. Brown; Schatschneider, Christopher; Maner, Jon K.; Ahmed, Yusra; Wagner, Richard K.

    2011-01-01

    Rates of agreement among alternative definitions of reading disability and their 1- and 2-year stabilities were examined using a new measure of agreement, the affected-status agreement statistic. Participants were 288,114 first through third grade students. Reading measures were "Dynamic Indicators of Basic Early Literacy Skills" Oral…

  5. Elementary Preservice Teachers' Reasoning about Modeling a "Family Factory" with TinkerPlots--A Pilot Study

    ERIC Educational Resources Information Center

    Biehler, Rolf; Frischemeier, Daniel; Podworny, Susanne

    2017-01-01

    Connecting data and chance is fundamental in statistics curricula. The use of software like TinkerPlots can bridge both worlds because the TinkerPlots Sampler supports learners in expressive modeling. We conducted a study with elementary preservice teachers with a basic university education in statistics. They were asked to set up and evaluate…

  6. Fieldcrest Cannon, Inc. Advanced Technical Preparation. Statistical Process Control (SPC). PRE-SPC 11: SPC & Graphs. Instructor Book.

    ERIC Educational Resources Information Center

    Averitt, Sallie D.

    This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills in working with line graphs and teaching…

  7. Effects of an Instructional Gaming Characteristic on Learning Effectiveness, Efficiency, and Engagement: Using a Storyline to Teach Basic Statistical Analytical Skills

    ERIC Educational Resources Information Center

    Novak, Elena

    2012-01-01

    The study explored instructional benefits of a storyline gaming characteristic (GC) on learning effectiveness, efficiency, and engagement with the use of an online instructional simulation for graduate students in an introductory statistics course. In addition, the study focused on examining the effects of a storyline GC on specific learning…

  8. A statistical mechanics approach to autopoietic immune networks

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Agliari, Elena

    2010-07-01

    In this work we aim to bridge theoretical immunology and disordered statistical mechanics. We introduce a model for the behavior of B-cells which naturally merges the clonal selection theory and the autopoietic network theory as a whole. From the analysis of its features we recover several basic phenomena such as low-dose tolerance, dynamical memory of antigens and self/non-self discrimination.

  9. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.
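
    The Monte Carlo approach mentioned above can be illustrated with a minimal sketch: sample parameter sets from the covariance of a calibrated model and examine the spread of the resulting predictions. The linear model and data below are synthetic stand-ins for a calibrated model of any kind.

        # Minimal Monte Carlo prediction-uncertainty sketch for a calibrated
        # linear model: draw parameters from their estimated covariance and
        # summarize the spread of predictions at a new location.
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 30)
        y = 2.0 + 0.5 * x + rng.normal(0, 0.4, x.size)

        # Ordinary least squares by hand, with the parameter covariance matrix.
        X = np.column_stack([np.ones_like(x), x])
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        sigma2 = np.sum((y - X @ beta_hat) ** 2) / (x.size - 2)
        cov_beta = sigma2 * np.linalg.inv(X.T @ X)

        # Monte Carlo: sample parameter sets and predict at an extrapolated point.
        x_new = np.array([1.0, 12.0])
        draws = rng.multivariate_normal(beta_hat, cov_beta, size=5000)
        pred = draws @ x_new
        lo, hi = np.percentile(pred, [2.5, 97.5])
        print(f"prediction {x_new @ beta_hat:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")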

  10. Applying Precision Medicine to Trial Design Using Physiology. Extracorporeal CO2 Removal for Acute Respiratory Distress Syndrome.

    PubMed

    Goligher, Ewan C; Amato, Marcelo B P; Slutsky, Arthur S

    2017-09-01

    In clinical trials of therapies for acute respiratory distress syndrome (ARDS), the average treatment effect in the study population may be attenuated because individual patient responses vary widely. This inflates sample size requirements and increases the cost and difficulty of conducting successful clinical trials. One solution is to enrich the study population with patients most likely to benefit, based on predicted patient response to treatment (predictive enrichment). In this perspective, we apply the precision medicine paradigm to the emerging use of extracorporeal CO 2 removal (ECCO 2 R) for ultraprotective ventilation in ARDS. ECCO 2 R enables reductions in tidal volume and driving pressure, key determinants of ventilator-induced lung injury. Using basic physiological concepts, we demonstrate that dead space and static compliance determine the effect of ECCO 2 R on driving pressure and mechanical power. This framework might enable prediction of individual treatment responses to ECCO 2 R. Enriching clinical trials by selectively enrolling patients with a significant predicted treatment response can increase treatment effect size and statistical power more efficiently than conventional enrichment strategies that restrict enrollment according to the baseline risk of death. To support this claim, we simulated the predicted effect of ECCO 2 R on driving pressure and mortality in a preexisting cohort of patients with ARDS. Our computations suggest that restricting enrollment to patients in whom ECCO 2 R allows driving pressure to be decreased by 5 cm H 2 O or more can reduce sample size requirement by more than 50% without increasing the total number of patients to be screened. We discuss potential implications for trial design based on this framework.
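
    The physiological argument above rests on the relation that driving pressure equals tidal volume divided by static respiratory-system compliance. A back-of-the-envelope sketch of the enrichment rule under that simplification follows; the numbers are illustrative only, and the paper's fuller treatment of dead space and CO2 kinetics is not modeled.

        # Back-of-the-envelope sketch: predicted drop in driving pressure when
        # ECCO2R permits a lower tidal volume, and a simple enrichment rule.
        ml_per_kg_baseline = 6.0    # conventional lung-protective tidal volume
        ml_per_kg_ultra = 4.0       # "ultraprotective" tidal volume with ECCO2R
        pbw_kg = 70.0               # predicted body weight (illustrative)

        def driving_pressure(tidal_volume_ml, compliance_ml_per_cmh2o):
            """Static driving pressure in cm H2O (plateau minus PEEP)."""
            return tidal_volume_ml / compliance_ml_per_cmh2o

        for crs in (20.0, 30.0, 50.0):   # static compliance, mL/cm H2O
            dp_before = driving_pressure(ml_per_kg_baseline * pbw_kg, crs)
            dp_after = driving_pressure(ml_per_kg_ultra * pbw_kg, crs)
            delta = dp_before - dp_after
            decision = "enrol" if delta >= 5.0 else "screen out"
            print(f"Crs={crs:>4.0f}: dP {dp_before:.1f} -> {dp_after:.1f} "
                  f"(drop {delta:.1f} cm H2O) -> {decision}")

    In this simplified picture, patients with low compliance gain the largest predicted reduction in driving pressure and are the ones an enriched trial would enrol.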

  11. Evaluation of recently validated non-invasive formula using basic lung functions as new screening tool for pulmonary hypertension in idiopathic pulmonary fibrosis patients

    PubMed Central

    Ghanem, Maha K.; Makhlouf, Hoda A.; Agmy, Gamal R.; Imam, Hisham M. K.; Fouad, Doaa A.

    2009-01-01

    BACKGROUND: A prediction formula for mean pulmonary artery pressure (MPAP) using standard lung function measurements has recently been validated to screen for pulmonary hypertension (PH) in idiopathic pulmonary fibrosis (IPF) patients. OBJECTIVE: To test the usefulness of this formula as a new non-invasive screening tool for PH in IPF patients, and to study its correlation with patients' clinical data, pulmonary function tests, arterial blood gases (ABGs) and other commonly used screening methods for PH, including electrocardiogram (ECG), chest X-ray (CXR), trans-thoracic echocardiography (TTE) and computerized tomography pulmonary angiography (CTPA). MATERIALS AND METHODS: A cross-sectional study of 37 IPF patients from a tertiary hospital. The accuracy of MPAP estimation was assessed by examining the correlation between the formula-predicted MPAP and PH diagnosed by other screening tools and by patients' clinical signs of PH. RESULTS: There was no statistically significant difference in the prediction of PH using a cut-off point of 21 or 25 mm Hg (P = 0.24). Formula-predicted MPAP greater than 25 mm Hg correlated strongly, in the expected direction, with O2 saturation (r = −0.95, P < 0.000), partial arterial O2 tension (r = −0.71, P < 0.000), right ventricular systolic pressure measured by TTE (r = 0.6, P < 0.000) and hilar width on CXR (r = 0.31, P = 0.03). Chest symptoms, ECG and CTPA signs of PH correlated poorly with the formula (P > 0.05). CONCLUSIONS: The prediction formula for MPAP using standard lung function measurements is a simple non-invasive tool that can be used, like TTE, to screen for PH in IPF patients and to select those who need right heart catheterization. PMID:19881164

  12. External validation of the diffuse intrinsic pontine glioma survival prediction model: a collaborative report from the International DIPG Registry and the SIOPE DIPG Registry.

    PubMed

    Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James

    2017-08-01

    We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and presence of ring-enhancement on MRI as unfavorable predictor. Model performance was evaluated by analyzing the discrimination and calibration abilities. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariate- and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.
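
    The discrimination reported above is summarized by a C statistic. A brute-force sketch of Harrell's concordance on a synthetic cohort is given below; it ignores censoring for simplicity and does not use registry data.

        # Hedged sketch: Harrell's C computed by brute force for a synthetic
        # cohort with fully observed survival times (no censoring handled).
        import numpy as np

        rng = np.random.default_rng(8)
        n = 120
        risk_score = rng.normal(size=n)                     # model's predicted risk
        survival_months = rng.exponential(12 * np.exp(-0.7 * risk_score))

        concordant, comparable = 0, 0
        for i in range(n):
            for j in range(i + 1, n):
                if survival_months[i] == survival_months[j]:
                    continue
                comparable += 1
                shorter = i if survival_months[i] < survival_months[j] else j
                longer = j if shorter == i else i
                concordant += risk_score[shorter] > risk_score[longer]
        print(f"C statistic ~ {concordant / comparable:.2f}")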

  13. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    NASA Astrophysics Data System (ADS)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistics data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various basic statistics topics along with a parametric statistical data analysis module. Its output is parametric statistical analysis that can be used by students, lecturers, and other users who need statistical results quickly and in an easily understood form. The Android application is developed in the Java programming language; the server side uses PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology is the Waterfall model, with stages of analysis, design, coding, testing, implementation, and system maintenance. The application is expected to support statistics lectures and to help students understand statistical analysis on mobile devices.

  14. Predicting the payload capability of cable logging systems including the effect of partial suspension

    Treesearch

    Gary D. Falk

    1981-01-01

    A systematic procedure for predicting the payload capability of running, live, and standing skylines is presented. Three hand-held calculator programs are used to predict payload capability that includes the effect of partial suspension. The programs allow for predictions for downhill yarding and for yarding away from the yarder. The equations and basic principles...

  15. Posttraumatic growth in people with traumatic long-term spinal cord injury: predictive role of basic hope and coping.

    PubMed

    Byra, S

    2016-06-01

    Participants with spinal cord injury (SCI) sustained at least 15 years before the study completed questionnaires measuring posttraumatic growth (PTG), basic hope and coping strategies. The aim was to determine the contribution of basic hope and coping strategies to PTG variability in participants with traumatic long-term SCI. The setting comprised Polish rehabilitation centres, foundations and associations implementing social inclusion and professional activation programmes. Participants were enrolled based on their medical history by trained rehabilitation specialists and psychologists. The set of questionnaires included the Post-traumatic Growth Inventory, the Coping Orientations to Problems Experienced (COPE) and the Basic Hope Inventory. Among the 169 individuals with paraplegia studied, PTG showed the highest degree of positive change in appreciation of life (AL) and the lowest in self-perception. Regression analysis showed that coping strategies such as religion (REL), focus on the problem, humour and alcohol/drug use ideation, together with basic hope, jointly account for 60% of the variance in PTG, with REL making the largest contribution. Coping strategies and basic hope were also found to predict variance in individual aspects of growth. Age at trauma exposure correlated positively with changes in AL and spiritual change, whereas no significant relationship between growth and the age of participants was confirmed. PTG in people with long-term traumatic SCI is manifested primarily in increased AL, and specific coping strategies and basic hope play a significant role in fostering positive change.

  16. Tailoring a psychophysical discrimination experiment upon assessment of the psychometric function: Predictions and results

    NASA Astrophysics Data System (ADS)

    Vilardi, Andrea; Tabarelli, Davide; Ricci, Leonardo

    2015-02-01

    Decision making is a widespread research topic and plays a crucial role in neuroscience as well as in other research and application fields such as biology, medicine and economics. The most basic implementation of decision making, namely binary discrimination, is successfully interpreted by means of signal detection theory (SDT), a statistical model that is deeply linked to physics. An additional, widespread tool to investigate discrimination ability is the psychometric function, which measures the probability of a given response as a function of the magnitude of a physical quantity underlying the stimulus. However, the link between psychometric functions and binary discrimination experiments is often neglected or misinterpreted. The aim of the present paper is to provide a detailed description of an experimental investigation of a prototypical discrimination task and to discuss the results in terms of SDT. To this purpose, we provide an outline of the theory and describe the implementation of two behavioural experiments in the visual modality: upon assessment of the so-called psychometric function, we show how to tailor a binary discrimination experiment to performance and decisional bias, and how to measure these quantities on a statistical basis. Attention is devoted to the evaluation of uncertainties, an aspect which is also often overlooked in the scientific literature.
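
    The SDT quantities the paper tailors its experiment around, sensitivity (d') and decisional bias (the criterion c), can be estimated directly from hit and false-alarm counts. The sketch below uses standard formulas on made-up counts; it is not the authors' analysis code.

        # Hedged sketch: d' and criterion c from a binary discrimination session.
        from scipy.stats import norm

        def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
            """Return (d', c), with a mild correction to avoid rates of 0 or 1."""
            n_signal = hits + misses
            n_noise = false_alarms + correct_rejections
            hit_rate = (hits + 0.5) / (n_signal + 1.0)
            fa_rate = (false_alarms + 0.5) / (n_noise + 1.0)
            z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
            return z_hit - z_fa, -0.5 * (z_hit + z_fa)

        # Illustrative counts from a hypothetical 200-trial session.
        d, c = dprime_and_criterion(hits=78, misses=22,
                                    false_alarms=30, correct_rejections=70)
        print(f"d' = {d:.2f}, criterion c = {c:.2f}")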

  17. Predicting Cell Association of Surface-Modified Nanoparticles Using Protein Corona Structure - Activity Relationships (PCSAR).

    PubMed

    Kamath, Padmaja; Fernandez, Alberto; Giralt, Francesc; Rallo, Robert

    2015-01-01

    Nanoparticles are likely to interact in real-case application scenarios with mixtures of proteins and biomolecules that adsorb onto their surface, forming the so-called protein corona. Information related to the composition of the protein corona and net cell association was collected from the literature for a library of surface-modified gold and silver nanoparticles. For each protein in the corona, sequence information was extracted and used to calculate physicochemical properties and statistical descriptors. Data cleaning and preprocessing techniques, including statistical analysis and feature selection methods, were applied to remove highly correlated, redundant and non-significant features. A weighting technique was applied to construct specific signatures that represent the corona composition for each nanoparticle. Using this basic set of protein descriptors, a new Protein Corona Structure-Activity Relationship (PCSAR) that relates net cell association with the physicochemical descriptors of the proteins that form the corona was developed and validated. The features that resulted from the feature selection were in line with already published literature, and the computational model constructed on these features had good accuracy (R2(LOO) = 0.76 and R2(LMO, 25%) = 0.72) and stability, with the advantage that the fingerprints based on physicochemical descriptors were independent of the specific proteins that form the corona.
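
    The validation statistic quoted above, a leave-one-out R2, can be reproduced in form with a few lines of scikit-learn. The sketch below uses synthetic "corona descriptor" features and a ridge regressor as placeholders; it is not the published PCSAR model or dataset.

        # Hedged sketch of a leave-one-out cross-validated R2 for a
        # descriptor-based regression model. Synthetic features only.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import LeaveOneOut, cross_val_predict
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(2)
        n_particles, n_descriptors = 40, 6
        X = rng.normal(size=(n_particles, n_descriptors))   # weighted corona descriptors
        y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.0, 0.2]) + rng.normal(0, 0.3, n_particles)

        y_loo = cross_val_predict(Ridge(alpha=1.0), X, y, cv=LeaveOneOut())
        print(f"R2(LOO) = {r2_score(y, y_loo):.2f}")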

  18. Basic Confidence Predictors of Career Decision-Making Self-Efficacy

    ERIC Educational Resources Information Center

    Paulsen, Alisa M.; Betz, Nancy E.

    2004-01-01

    The extent to which Basic Confidence Scales predicted career decision-making self-efficacy was studied in a sample of 627 undergraduate students. Six confidence variables accounted for 49% of the variance in career decision-making self-efficacy. Leadership confidence was the most important, but confidence in science, mathematics, writing, using…

  19. Investigating Complexity Using Excel and Visual Basic.

    ERIC Educational Resources Information Center

    Zetie, K. P.

    2001-01-01

    Shows how some of the simple ideas in complexity can be investigated using a spreadsheet and a macro written in Visual Basic. Shows how the sandpile model of Bak, Tang, and Wiesenfeld can be simulated and animated. The model produces results that cannot easily be predicted from its properties. (Author/MM)
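
    The sandpile model referred to here is easy to reproduce outside a spreadsheet. A compact sketch in Python (rather than Visual Basic) is given below: grains are dropped on random sites, any site holding four or more grains topples onto its neighbours, and avalanche sizes come out heavy-tailed, which is what makes the outcome hard to predict from the model's simple rules.

        # Sketch of the Bak-Tang-Wiesenfeld sandpile on an open-boundary grid.
        import numpy as np

        rng = np.random.default_rng(3)
        L = 30
        grid = np.zeros((L, L), dtype=int)
        avalanche_sizes = []

        for _ in range(20000):
            i, j = rng.integers(0, L, size=2)
            grid[i, j] += 1                       # drop one grain at random
            size = 0
            while True:
                unstable = np.argwhere(grid >= 4)
                if unstable.size == 0:
                    break
                for a, b in unstable:             # topple every unstable site
                    grid[a, b] -= 4
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < L and 0 <= nb < L:
                            grid[na, nb] += 1     # grains at the edge fall off
            avalanche_sizes.append(size)

        sizes = np.array(avalanche_sizes)
        print("largest avalanche:", sizes.max(), "mean size:", round(sizes.mean(), 2))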

  20. Statistical Mining of Predictability of Seasonal Precipitation over the United States

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Kim, Kyu-Myong; Shen, S. P.

    2001-01-01

    Results from a new ensemble canonical correlation (ECC) prediction model yield remarkable (10-20%) increases in baseline prediction skill for seasonal precipitation over the US for all seasons, compared with traditional statistical predictions. While the tropical Pacific, i.e., El Nino, contributes the largest share of potential predictability in the southern tier states during boreal winter, the North Pacific and the North Atlantic are responsible for enhanced predictability in the northern Great Plains, Midwest and southwest US during boreal summer. Most importantly, ECC significantly reduces the spring predictability barrier over the conterminous US, thereby raising the skill bar for dynamical predictions.
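
    The canonical-correlation step at the core of an ECC-type scheme can be sketched with scikit-learn: find coupled patterns between a predictor field and a seasonal precipitation field, then use the leading canonical variates as predictors. The fields below are synthetic stand-ins sharing one common signal; the operational ECC model additionally ensembles many such regressions, which is not reproduced here.

        # Hedged sketch of the canonical correlation building block.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(9)
        years, n_sst, n_precip = 50, 20, 15
        signal = rng.normal(size=years)              # shared climate signal
        sst = np.outer(signal, rng.normal(size=n_sst)) + rng.normal(size=(years, n_sst))
        precip = np.outer(signal, rng.normal(size=n_precip)) + rng.normal(size=(years, n_precip))

        cca = CCA(n_components=2).fit(sst, precip)
        u, v = cca.transform(sst, precip)
        for k in range(2):
            r = np.corrcoef(u[:, k], v[:, k])[0, 1]
            print(f"canonical mode {k + 1}: correlation {r:.2f}")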

  1. Predicting Subsequent Myopia in Initially Pilot-Qualified USAFA Cadets.

    DTIC Science & Technology

    1985-12-27

    Only fragments are recoverable from this record; the DTIC accession-form text has been removed. Recoverable table-of-contents entries: Refraction Measurement; 4.0 Results; 4.1 Descriptive Statistics; 4.2 Predictive Statistics. Recoverable text excerpts: "...mentioned), and three were missing a status. The data of the subject who was commissionable were dropped from the statistical analyses. Of the 91..." and "...relatively equal numbers of participants from all classes will become obvious within the results. ... In the original plan..."

  2. Tertiary alphabet for the observable protein structural universe.

    PubMed

    Mackenzie, Craig O; Zhou, Jianfu; Grigoryan, Gevorg

    2016-11-22

    Here, we systematically decompose the known protein structural universe into its basic elements, which we dub tertiary structural motifs (TERMs). A TERM is a compact backbone fragment that captures the secondary, tertiary, and quaternary environments around a given residue, comprising one or more disjoint segments (three on average). We seek the set of universal TERMs that capture all structure in the Protein Data Bank (PDB), finding remarkable degeneracy. Only ∼600 TERMs are sufficient to describe 50% of the PDB at sub-Angstrom resolution. However, more rare geometries also exist, and the overall structural coverage grows logarithmically with the number of TERMs. We go on to show that universal TERMs provide an effective mapping between sequence and structure. We demonstrate that TERM-based statistics alone are sufficient to recapitulate close-to-native sequences given either NMR or X-ray backbones. Furthermore, sequence variability predicted from TERM data agrees closely with evolutionary variation. Finally, locations of TERMs in protein chains can be predicted from sequence alone based on sequence signatures emergent from TERM instances in the PDB. For multisegment motifs, this method identifies spatially adjacent fragments that are not contiguous in sequence-a major bottleneck in structure prediction. Although all TERMs recur in diverse proteins, some appear specialized for certain functions, such as interface formation, metal coordination, or even water binding. Structural biology has benefited greatly from previously observed degeneracies in structure. The decomposition of the known structural universe into a finite set of compact TERMs offers exciting opportunities toward better understanding, design, and prediction of protein structure.

  3. Tertiary alphabet for the observable protein structural universe

    PubMed Central

    Mackenzie, Craig O.; Zhou, Jianfu; Grigoryan, Gevorg

    2016-01-01

    Here, we systematically decompose the known protein structural universe into its basic elements, which we dub tertiary structural motifs (TERMs). A TERM is a compact backbone fragment that captures the secondary, tertiary, and quaternary environments around a given residue, comprising one or more disjoint segments (three on average). We seek the set of universal TERMs that capture all structure in the Protein Data Bank (PDB), finding remarkable degeneracy. Only ∼600 TERMs are sufficient to describe 50% of the PDB at sub-Angstrom resolution. However, more rare geometries also exist, and the overall structural coverage grows logarithmically with the number of TERMs. We go on to show that universal TERMs provide an effective mapping between sequence and structure. We demonstrate that TERM-based statistics alone are sufficient to recapitulate close-to-native sequences given either NMR or X-ray backbones. Furthermore, sequence variability predicted from TERM data agrees closely with evolutionary variation. Finally, locations of TERMs in protein chains can be predicted from sequence alone based on sequence signatures emergent from TERM instances in the PDB. For multisegment motifs, this method identifies spatially adjacent fragments that are not contiguous in sequence—a major bottleneck in structure prediction. Although all TERMs recur in diverse proteins, some appear specialized for certain functions, such as interface formation, metal coordination, or even water binding. Structural biology has benefited greatly from previously observed degeneracies in structure. The decomposition of the known structural universe into a finite set of compact TERMs offers exciting opportunities toward better understanding, design, and prediction of protein structure. PMID:27810958

  4. Seismo-induced effects in the near-earth space: Combined ground and space investigations as a contribution to earthquake prediction

    NASA Astrophysics Data System (ADS)

    Sgrigna, V.; Buzzi, A.; Conti, L.; Picozza, P.; Stagni, C.; Zilpimiani, D.

    2007-02-01

    The paper aims at giving a few methodological suggestions for deterministic earthquake prediction studies based on combined ground-based and space observations of earthquake precursors. What has been lacking up to now is a demonstration of a causal relationship, with the underlying physical processes explained, supported by correlation between data gathered simultaneously and continuously by space observations and by ground-based measurements. Coordinated space and ground-based observations require test sites on the Earth's surface so that ground data, collected by appropriate networks of instruments, can be correlated with space data detected on board LEO satellites. To this end, a new result reported in the paper is an original and specific space mission project (ESPERIA) and two instruments of its payload. The ESPERIA space project has been performed for the Italian Space Agency, and three ESPERIA instruments (the ARINA and LAZIO particle detectors and the EGLE search-coil magnetometer) have been built and tested in space. The EGLE experiment started on April 15, 2005, on board the ISS, within the ENEIDE mission. The launch of ARINA occurred on June 15, 2006, on board the RESURS DK-1 Russian LEO satellite. As an introduction and justification to these experiments, the paper clarifies some basic concepts and critical methodological aspects concerning deterministic and statistical approaches and their use in earthquake prediction. We also take the liberty of giving the scientific community a few critical hints based on our personal experience in the field and propose a joint study devoted to earthquake prediction and warning.

  5. Which Preschool Mathematics Competencies Are Most Predictive of Fifth Grade Achievement?

    PubMed

    Nguyen, Tutrang; Watts, Tyler W; Duncan, Greg J; Clements, Douglas H; Sarama, Julie S; Wolfe, Christopher; Spitler, Mary Elaine

    In an effort to promote best practices regarding mathematics teaching and learning at the preschool level, national advisory panels and organizations have emphasized the importance of children's emergent counting and related competencies, such as the ability to verbally count, maintain one-to-one correspondence, count with cardinality, subitize, and count forward or backward from a given number. However, little research has investigated whether the kind of mathematical knowledge promoted by the various standards documents actually predict later mathematics achievement. The present study uses longitudinal data from a primarily low-income and minority sample of children to examine the extent to which preschool mathematical competencies, specifically basic and advanced counting, predict fifth grade mathematics achievement. Using regression analyses, we find early numeracy abilities to be the strongest predictors of later mathematics achievement, with advanced counting competencies more predictive than basic counting competencies. Our results highlight the significance of preschool mathematics knowledge for future academic achievement.

  6. Facts about Congenital Heart Defects

    MedlinePlus

  7. Symptoms of Uterine Cancer

    MedlinePlus

  8. Seasonal drought predictability in Portugal using statistical-dynamical techniques

    NASA Astrophysics Data System (ADS)

    Ribeiro, A. F. S.; Pires, C. A. L.

    2016-08-01

    Atmospheric forecasting and predictability are important for promoting adaptation and mitigation measures that minimize drought impacts. This study estimates hybrid (statistical-dynamical) long-range forecasts of the regional drought index SPI (3 months) over homogeneous regions of mainland Portugal, based on forecasts from the UKMO operational forecasting system with lead times up to 6 months. ERA-Interim reanalysis data are used to build a set of SPI predictors integrating recent past information prior to the forecast launch. The advantage of combining predictors with both dynamical and statistical backgrounds in predicting drought conditions at different lags is then evaluated. A two-step hybridization procedure is performed, in which both forecasted and observed 500 hPa geopotential height fields are subjected to a PCA so that forecasted PCs and persistent PCs can be used as predictors. The second hybridization step consists of a statistical/hybrid downscaling to the regional SPI, based on regression techniques, after pre-selection of the statistically significant predictors. The SPI forecasts and the added value of combining dynamical and statistical methods are evaluated in cross-validation mode, using R2 and binary event scores. Results are obtained for the four seasons; winter is found to be the most predictable season, and most of the predictive power lies in the large-scale fields from past observations. The hybridization improves the downscaling based on the forecasted PCs, since they provide complementary (though modest) information beyond that of the persistent PCs. These findings provide clues about the predictability of the SPI, particularly in Portugal, and may contribute to the predictability of crop yields and to guidance for users (such as farmers) in their decision-making processes.
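
    The two-step procedure described above (PCA of the large-scale field, then regression of the regional SPI on a pre-selected set of PCs, scored in cross-validation) can be outlined in a few lines. The sketch below uses a synthetic field and index rather than the UKMO forecasts or ERA-Interim predictors, and a crude correlation screen in place of the study's significance-based pre-selection.

        # Hedged sketch: PCA predictors + regression downscaling of a drought index.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n_months, n_gridpoints = 240, 500
        z500 = rng.normal(size=(n_months, n_gridpoints))   # stand-in for 500 hPa heights
        pcs = PCA(n_components=10).fit_transform(z500)     # leading modes of variability
        spi = 0.6 * pcs[:, 0] - 0.3 * pcs[:, 2] + rng.normal(0, 1.0, n_months)

        # Keep only PCs whose correlation with the SPI clears a rough threshold.
        keep = [k for k in range(pcs.shape[1])
                if abs(np.corrcoef(pcs[:, k], spi)[0, 1]) > 2 / np.sqrt(n_months)]
        scores = cross_val_score(LinearRegression(), pcs[:, keep], spi, cv=5, scoring="r2")
        print("selected PCs:", keep, "cross-validated R2:", round(scores.mean(), 2))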

  9. Critical appraisal of scientific articles: part 1 of a series on evaluation of scientific publications.

    PubMed

    du Prel, Jean-Baptist; Röhrig, Bernd; Blettner, Maria

    2009-02-01

    In the era of evidence-based medicine, one of the most important skills a physician needs is the ability to analyze scientific literature critically. This is necessary to keep medical knowledge up to date and to ensure optimal patient care. The aim of this paper is to present an accessible introduction to the critical appraisal of scientific articles. Using a selection of international literature, the reader is introduced to the principles of critical reading of scientific articles in medicine. For the sake of conciseness, a detailed description of statistical methods is omitted. Widely accepted principles for critically appraising scientific articles are outlined. Basic knowledge of study design, the structure of an article, the role of its different sections and of statistical presentations, as well as sources of error and limitation, is presented. The reader does not require extensive methodological knowledge. As far as necessary for the critical appraisal of scientific articles, differences between research areas such as epidemiology, clinical research, and basic research are outlined. Further useful references are provided. Basic methodological knowledge is required to select and interpret scientific articles correctly.

  10. Statistical comparisons of AGDISP prediction with Mission III data

    Treesearch

    Baozhong Duan; Karl Mierzejewski; William G. Yendol

    1991-01-01

    Statistical comparisons of AGDISP predictions were made against data obtained during aerial spray field trials ("Mission III") conducted in March 1987 at the APHIS Facility, Moore Air Base, Edinburg, Texas, by the NEFAAT group (Northeast Forest Aerial Application Technology). For seven out of twenty-one runs, observed and predicted means (O and P), mean bias...

  11. Consequences of common data analysis inaccuracies in CNS trauma injury basic research.

    PubMed

    Burke, Darlene A; Whittemore, Scott R; Magnuson, David S K

    2013-05-15

    The development of successful treatments for humans after traumatic brain or spinal cord injuries (TBI and SCI, respectively) requires animal research. This effort can be hampered when promising experimental results cannot be replicated because of incorrect data analysis procedures. To identify and hopefully avoid these errors in future studies, the articles in seven journals with the highest number of basic science central nervous system TBI and SCI animal research studies published in 2010 (N=125 articles) were reviewed for their data analysis procedures. After identifying the most common statistical errors, the implications of those findings were demonstrated by reanalyzing previously published data from our laboratories using the identified inappropriate statistical procedures, then comparing the two sets of results. Overall, 70% of the articles contained at least one type of inappropriate statistical procedure. The highest percentage involved incorrect post hoc t-tests (56.4%), followed by inappropriate parametric statistics (analysis of variance and t-test; 37.6%). Repeated Measures analysis was inappropriately missing in 52.0% of all articles and, among those with behavioral assessments, 58% were analyzed incorrectly. Reanalysis of our published data using the most common inappropriate statistical procedures resulted in a 14.1% average increase in significant effects compared to the original results. Specifically, an increase of 15.5% occurred with Independent t-tests and 11.1% after incorrect post hoc t-tests. Utilizing proper statistical procedures can allow more-definitive conclusions, facilitate replicability of research results, and enable more accurate translation of those results to the clinic.
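
    The most common error identified above, uncorrected pairwise t-tests in place of an omnibus test with a proper post hoc procedure, is easy to demonstrate on synthetic data. The sketch below contrasts the two analyses on four groups drawn from the same distribution; group names and sample sizes are made up for illustration.

        # Hedged sketch: uncorrected t-tests versus ANOVA plus Tukey's HSD.
        import numpy as np
        from itertools import combinations
        from scipy import stats
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(5)
        groups = {name: rng.normal(10, 2, 12)
                  for name in ("sham", "injury", "treatA", "treatB")}

        # Omnibus test first.
        F, p = stats.f_oneway(*groups.values())
        print(f"one-way ANOVA: F = {F:.2f}, p = {p:.3f}")

        # Inappropriate: uncorrected pairwise t-tests inflate false positives.
        hits = sum(stats.ttest_ind(groups[a], groups[b]).pvalue < 0.05
                   for a, b in combinations(groups, 2))
        print(f"uncorrected t-tests flagged {hits} of 6 comparisons")

        # More appropriate: Tukey's HSD after the omnibus ANOVA.
        values = np.concatenate(list(groups.values()))
        labels = np.repeat(list(groups.keys()), 12)
        print(pairwise_tukeyhsd(values, labels, alpha=0.05))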

  12. Difficult Decisions: A Qualitative Exploration of the Statistical Decision Making Process from the Perspectives of Psychology Students and Academics

    PubMed Central

    Allen, Peter J.; Dorozenko, Kate P.; Roberts, Lynne D.

    2016-01-01

    Quantitative research methods are essential to the development of professional competence in psychology. They are also an area of weakness for many students. In particular, students are known to struggle with the skill of selecting quantitative analytical strategies appropriate for common research questions, hypotheses and data types. To begin understanding this apparent deficit, we presented nine psychology undergraduates (who had all completed at least one quantitative methods course) with brief research vignettes, and asked them to explicate the process they would follow to identify an appropriate statistical technique for each. Thematic analysis revealed that all participants found this task challenging, and even those who had completed several research methods courses struggled to articulate how they would approach the vignettes on more than a very superficial and intuitive level. While some students recognized that there is a systematic decision making process that can be followed, none could describe it clearly or completely. We then presented the same vignettes to 10 psychology academics with particular expertise in conducting research and/or research methods instruction. Predictably, these “experts” were able to describe a far more systematic, comprehensive, flexible, and nuanced approach to statistical decision making, which begins early in the research process, and pays consideration to multiple contextual factors. They were sensitive to the challenges that students experience when making statistical decisions, which they attributed partially to how research methods and statistics are commonly taught. This sensitivity was reflected in their pedagogic practices. When asked to consider the format and features of an aid that could facilitate the statistical decision making process, both groups expressed a preference for an accessible, comprehensive and reputable resource that follows a basic decision tree logic. For the academics in particular, this aid should function as a teaching tool, which engages the user with each choice-point in the decision making process, rather than simply providing an “answer.” Based on these findings, we offer suggestions for tools and strategies that could be deployed in the research methods classroom to facilitate and strengthen students' statistical decision making abilities. PMID:26909064

  13. Difficult Decisions: A Qualitative Exploration of the Statistical Decision Making Process from the Perspectives of Psychology Students and Academics.

    PubMed

    Allen, Peter J; Dorozenko, Kate P; Roberts, Lynne D

    2016-01-01

    Quantitative research methods are essential to the development of professional competence in psychology. They are also an area of weakness for many students. In particular, students are known to struggle with the skill of selecting quantitative analytical strategies appropriate for common research questions, hypotheses and data types. To begin understanding this apparent deficit, we presented nine psychology undergraduates (who had all completed at least one quantitative methods course) with brief research vignettes, and asked them to explicate the process they would follow to identify an appropriate statistical technique for each. Thematic analysis revealed that all participants found this task challenging, and even those who had completed several research methods courses struggled to articulate how they would approach the vignettes on more than a very superficial and intuitive level. While some students recognized that there is a systematic decision making process that can be followed, none could describe it clearly or completely. We then presented the same vignettes to 10 psychology academics with particular expertise in conducting research and/or research methods instruction. Predictably, these "experts" were able to describe a far more systematic, comprehensive, flexible, and nuanced approach to statistical decision making, which begins early in the research process, and pays consideration to multiple contextual factors. They were sensitive to the challenges that students experience when making statistical decisions, which they attributed partially to how research methods and statistics are commonly taught. This sensitivity was reflected in their pedagogic practices. When asked to consider the format and features of an aid that could facilitate the statistical decision making process, both groups expressed a preference for an accessible, comprehensive and reputable resource that follows a basic decision tree logic. For the academics in particular, this aid should function as a teaching tool, which engages the user with each choice-point in the decision making process, rather than simply providing an "answer." Based on these findings, we offer suggestions for tools and strategies that could be deployed in the research methods classroom to facilitate and strengthen students' statistical decision making abilities.

  14. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, different ARIMA models are identified from the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the boron data series. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation, the predicted results from the best ARIMA models are compared to the observed data and show reasonably good agreement. Comparison of the mean and variance of the 3-year (2002-2004) observed data against the predicted data from the selected best models shows that the boron models from the ARIMA approach can be used safely, since the predicted values preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
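
    The identification-estimation-checking loop described in this abstract maps directly onto a few lines of statsmodels. The sketch below fits a small grid of ARIMA orders to a synthetic monthly series, keeps the lowest-AIC model, and runs a Ljung-Box check on the residuals; the Büyük Menderes boron record itself is not reproduced here.

        # Hedged sketch of ARIMA order selection by AIC and residual diagnostics.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.stats.diagnostic import acorr_ljungbox

        rng = np.random.default_rng(6)
        n = 108                                  # nine years of monthly values
        t = np.arange(n)
        series = (2.0 + 0.4 * np.sin(2 * np.pi * t / 12)
                  + np.cumsum(rng.normal(0, 0.05, n)) + rng.normal(0, 0.3, n))

        # Identification/estimation: fit candidate orders, keep the lowest AIC.
        best = None
        for p in range(3):
            for q in range(3):
                res = ARIMA(series, order=(p, 1, q)).fit()
                if best is None or res.aic < best[1]:
                    best = ((p, 1, q), res.aic, res)
        order, aic, res = best
        print(f"selected ARIMA{order}, AIC = {aic:.1f}")

        # Diagnostic checking: residuals should behave like white noise.
        print(acorr_ljungbox(res.resid, lags=[12]))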

  15. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The Chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.

  16. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

    Only fragments are recoverable from this record; the report form fields have been removed. Recoverable excerpt: "...this dissertation explores the efficacy of statistical post-processing methods downstream of these dynamical model components with a hierarchical multivariate Bayesian approach to..." Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.
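
    The record above pairs ensemble post-processing with Markov chain Monte Carlo via the Metropolis algorithm. A minimal Metropolis sketch for a simple post-processing model (a Gaussian calibration of observations against the ensemble mean) is shown below; it is a generic illustration on synthetic forecasts, not the dissertation's hierarchical multivariate model.

        # Hedged sketch: Metropolis sampling for a simple forecast-calibration model.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 200
        ens_mean = rng.normal(15, 5, n)                     # ensemble-mean forecasts
        obs = 1.0 + 0.9 * ens_mean + rng.normal(0, 1.5, n)  # verifying observations

        def log_post(theta):
            a, b, log_sigma = theta
            sigma = np.exp(log_sigma)
            # Flat priors on a, b, log(sigma); Gaussian likelihood (constants dropped).
            return -n * log_sigma - 0.5 * np.sum((obs - a - b * ens_mean) ** 2) / sigma**2

        theta = np.array([0.0, 1.0, 0.0])
        lp = log_post(theta)
        samples = []
        for step in range(20000):
            prop = theta + rng.normal(0, [0.05, 0.005, 0.02])   # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis accept/reject
                theta, lp = prop, lp_prop
            if step >= 5000:                                    # discard burn-in
                samples.append(theta)

        a, b, log_sigma = np.mean(samples, axis=0)
        print(f"posterior means: a = {a:.2f}, b = {b:.2f}, sigma = {np.exp(log_sigma):.2f}")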

  17. BehavePlus fire modeling system: Past, present, and future

    Treesearch

    Patricia L. Andrews

    2007-01-01

    Use of mathematical fire models to predict fire behavior and fire effects plays an important supporting role in wildland fire management. When used in conjunction with personal fire experience and a basic understanding of the fire models, predictions can be successfully applied to a range of fire management activities including wildfire behavior prediction, prescribed...

  18. BOOK REVIEW: Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization and Disorder: Concepts and Tools

    NASA Astrophysics Data System (ADS)

    Franz, S.

    2004-10-01

    Since the discovery of the renormalization group theory in statistical physics, the realm of applications of the concepts of scale invariance and criticality has pervaded several fields of natural and social sciences. This is the leitmotiv of Didier Sornette's book, who in Critical Phenomena in Natural Sciences reviews three decades of developments and applications of the concepts of criticality, scale invariance and power law behaviour from statistical physics, to earthquake prediction, ruptures, plate tectonics, modelling biological and economic systems and so on. This strongly interdisciplinary book addresses students and researchers in disciplines where concepts of criticality and scale invariance are appropriate: mainly geology from which most of the examples are taken, but also engineering, biology, medicine, economics, etc. A good preparation in quantitative science is assumed but the presentation of statistical physics principles, tools and models is self-contained, so that little background in this field is needed. The book is written in a simple informal style encouraging intuitive comprehension rather than stressing formal derivations. Together with the discussion of the main conceptual results of the discipline, great effort is devoted to providing applied scientists with the tools of data analysis and modelling necessary to analyse, understand, make predictions and simulate systems undergoing complex collective behaviour. The book starts from a purely descriptive approach, explaining basic probabilistic and geometrical tools to characterize power law behaviour and scale invariant sets. Probability theory is introduced by a detailed discussion of interpretative issues warning the reader on the use and misuse of probabilistic concepts when the emphasis is on prediction of low probability rare---and often catastrophic---events. Then, concepts that have proved useful in risk evaluation, extreme value statistics, large limit theorems for sums of independent variables with power law distribution, random walks, fractals and multifractal formalisms, etc, are discussed in an immediate and direct way so as to provide ready-to-use tools for analysing and representing power law behaviour in natural phenomena. The exposition then continues discussing the main developments, allowing the reader to understand theoretically and model strongly correlated behaviour. After a concise, but useful, introduction to the fundamentals of statistical physics a discussion of equilibrium critical phenomena and the renormalization group is proposed to the reader. With the centrality of the problem of non-equilibrium behaviour in mind, a discussion is devoted to tentative applications of the concept of temperature in the off-equilibrium context. Particular emphasis is given to the development of long range correlation and of precursors of phase transitions, and their role in the prediction of catastrophic events. Then, basic models such as percolation and rupture models are described. A central position in the book is occupied by a chapter on mechanisms for power laws and a subsequent one on self-organized criticality as a general paradigm for critical behaviour as proposed by P Bak and collaborators. The book concludes with a chapter on the prediction of fields generated by a random distribution of sources. The book maintains the promise of the title of providing concepts and tools to tackle criticality and self-organization. 
The second edition, while retaining the structure of the first edition, considerably extends the scope with new examples and applications of a research field which is constantly growing. Any scientific book has to solve the dichotomy between the depth of discussion, the pedagogical character of exposition and the quantity of material discussed. In general the book, which evolved from a graduate student course, favours these last two aspects at the expense of the first one. This makes the book very readable and means that, while complicated concepts are always explained by means of simple examples, important results are often mentioned but not derived or discussed in depth. Most of the time this style of exposition manages to successfully convey the essential information, other times unfortunately, e.g. in the case of the chapter on disordered systems, the presentation appears rather superficial. This is the price we pay for a book covering an impressively vast subject area and the huge bibliography (more than 1000 references) furnishes a necessary guide for acquiring the working knowledge of the subject covered. I would recommend it to teachers planning introductory courses on the field of complex systems and to researchers wanting to learn about an area of great contemporary interest.
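
    One of the "ready-to-use tools" the book surveys, characterizing power-law behaviour in data, can be illustrated with the standard maximum-likelihood (Hill-type) estimate of a tail exponent. The sketch below applies it to synthetic Pareto-distributed draws; it is an illustration of the general technique, not an example taken from the book.

        # Hedged sketch: maximum-likelihood estimate of a power-law exponent.
        import numpy as np

        rng = np.random.default_rng(10)
        alpha_true, x_min = 2.5, 1.0
        # Inverse-transform sampling from a Pareto density p(x) ~ x**(-alpha).
        x = x_min * (1 - rng.uniform(size=5000)) ** (-1 / (alpha_true - 1))

        tail = x[x >= x_min]
        alpha_hat = 1 + tail.size / np.sum(np.log(tail / x_min))
        print(f"true exponent {alpha_true}, estimated {alpha_hat:.2f}")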

  19. Microbial burden prediction model for unmanned planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Hoffman, A. R.; Winterburn, D. A.

    1972-01-01

    The technical development of a computer program for predicting microbial burden on unmanned planetary spacecraft is outlined. The discussion includes the derivation of the basic analytical equations, the selection of a method for handling several random variables, the macrologic of the computer programs, and the validation and verification of the model. The prediction model was developed to (1) supplement the biological assays of a spacecraft by simulating microbial accretion during periods when assays are not taken; (2) minimize the necessity for a large number of microbiological assays; and (3) predict the microbial loading on a lander immediately prior to sterilization and on other non-lander equipment prior to launch. It is shown not only that these purposes were achieved but also that the prediction results compare favorably with the estimates derived from the direct assays. The computer program can be applied not only as a prediction instrument but also as a management and control tool. The basic logic of the model has possible applicability to other sequential flow processes, such as food processing.

  20. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation.

    PubMed

    Pearce, Marcus T

    2018-05-11

    Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
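
    The two mechanisms hypothesized here, statistical learning of regularities and probabilistic prediction from the learned model, can be caricatured with a toy bigram model over pitches: transition counts are learned from a small corpus, and each note of a new melody is assigned an information content (surprisal). IDyOM itself is a far richer multiple-viewpoint model; the melodies and counts below are invented for illustration only.

        # Toy sketch: bigram statistical learning and note-by-note surprisal.
        import math
        from collections import Counter, defaultdict

        corpus = [
            [60, 62, 64, 65, 67, 65, 64, 62, 60],   # toy melodies (MIDI pitches)
            [60, 64, 67, 64, 60, 62, 64, 62, 60],
            [67, 65, 64, 62, 60, 62, 64, 65, 67],
        ]

        # Statistical learning: count pitch-to-pitch transitions.
        transitions = defaultdict(Counter)
        alphabet = sorted({p for mel in corpus for p in mel})
        for mel in corpus:
            for prev, nxt in zip(mel, mel[1:]):
                transitions[prev][nxt] += 1

        def prob(prev, nxt):
            """Laplace-smoothed conditional probability p(next | previous)."""
            counts = transitions[prev]
            return (counts[nxt] + 1) / (sum(counts.values()) + len(alphabet))

        # Probabilistic prediction: information content of each note in a new melody.
        melody = [60, 62, 64, 67, 65, 64, 62, 60]
        for prev, nxt in zip(melody, melody[1:]):
            ic = -math.log2(prob(prev, nxt))
            print(f"{prev} -> {nxt}: information content {ic:.2f} bits")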

Top