Sample records for statistical confidence level

  1. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
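
    The concept this abstract explores can be made concrete in a few lines. Below is a minimal Python sketch (sample values invented for illustration, not taken from the article) of a 95% t-based confidence interval for a population mean:

```python
import math
import statistics

# Hypothetical sample values, invented for illustration
sample = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7]
n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# 97.5th percentile of Student's t with n - 1 = 7 degrees of freedom
t_crit = 2.365
lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```

    If we repeated the sampling many times and recomputed the interval each time, about 95% of those intervals would cover the true population mean; that is the sense of "confidence" the article investigates.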

  2. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  3. Patients and medical statistics. Interest, confidence, and ability.

    PubMed

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-11-01

    People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. Survey with retest after approximately 2 weeks. Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test-retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's alpha=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data.

  4. The Effect of a Surgical Skills Course on Confidence Levels of Rural General Practitioners: An Observational Study.

    PubMed

    Byrd, Pippa; Ward, Olga; Hamdorf, Jeffrey

    2016-10-01

    Objective: To investigate the effect of a short surgical skills course on general practitioners' confidence levels to perform procedural skills. Design: Prospective observational study. Setting: The Clinical Evaluation and Training Centre, a practical skills-based educational facility at The University of Western Australia. Participants: Medical practitioners who participated in these courses; nurses, physiotherapists, and medical students were excluded. The response rate was 61%, with 61 participants providing 788 responses for pre- and postcourse confidence levels regarding various surgical skills. Intervention: One- to two-day surgical skills courses consisting of presentations, demonstrations, and practical stations, facilitated by specialists. Main Outcome Measures: A two-page precourse and postcourse questionnaire was administered to medical practitioners on the day. Participants rated their confidence levels to perform skills addressed during the course on a 4-point Likert scale. Results: Of the 788 responses regarding confidence levels, 621 were rated as improved postcourse, 163 were rated as no change, and 4 were rated as lower postcourse. Seven of the courses showed a 25% median increase in confidence levels, and one course demonstrated a 50% median increase. All courses showed statistically significant results (p < 0.001). Conclusion: A short surgical skills course resulted in a statistically significant improvement in the confidence levels of rural general practitioners to perform these skills.

  5. Confidence limits and sample size for determining nonhost status of fruits and vegetables to tephritid fruit flies as a quarantine measure.

    PubMed

    Follett, Peter A; Hennessey, Michael K

    2007-04-01

    Quarantine measures including treatments are applied to exported fruit and vegetable commodities to control regulatory fruit fly pests and to reduce the likelihood of their introduction into new areas. Nonhost status can be an effective measure used to achieve quarantine security. As with quarantine treatments, nonhost status can stand alone as a measure if there is high efficacy and statistical confidence. The numbers of insects or fruit tested during investigation of nonhost status will determine the level of statistical confidence. If the level of confidence of nonhost status is not high, then additional measures may be required to achieve quarantine security as part of a systems approach. Certain countries require that either 99.99 or 99.9968% mortality, as a measure of efficacy, at the 95% confidence level, be achieved by a quarantine treatment to meet quarantine security. This article outlines how the level of confidence in nonhost status can be quantified so that its equivalency to traditional quarantine treatments may be demonstrated. Incorporating sample size and confidence levels into host status testing protocols along with efficacy will lead to greater consistency by regulatory decision-makers in interpreting results and, therefore, to more technically sound decisions on host status.
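
    The link between sample size, efficacy, and confidence that this abstract describes follows from a standard binomial argument. The sketch below is an illustration of that argument, not the authors' exact procedure: if the true mortality were exactly the required efficacy, all n test insects would die with probability efficacy**n, so n is chosen to push that probability below 1 - confidence.

```python
import math

def required_sample_size(efficacy, confidence):
    """Smallest n such that observing zero survivors among n insects
    demonstrates mortality of at least `efficacy` at `confidence`."""
    # P(zero survivors | true mortality == efficacy) = efficacy**n;
    # require that to be <= 1 - confidence.
    return math.ceil(math.log(1 - confidence) / math.log(efficacy))

# Quarantine security levels cited in the abstract, at 95% confidence
print(required_sample_size(0.9999, 0.95))    # 99.99% mortality
print(required_sample_size(0.999968, 0.95))  # 99.9968% mortality
```

    The stricter 99.9968% mortality standard requires roughly three times as many treated insects with zero survivors as the 99.99% standard.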

  6. Patients and Medical Statistics

    PubMed Central

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-01-01

    BACKGROUND People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. OBJECTIVE To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. DESIGN Survey with retest after approximately 2 weeks. SUBJECTS Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. MEASURES We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. RESULTS Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test–retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's α=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). CONCLUSION The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data. PMID:16307623

  7. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  8. Confidence level in performing clinical procedures among medical officers in nonspecialist government hospitals in Penang, Malaysia.

    PubMed

    Othman, Mohamad Sabri; Merican, Hassan; Lee, Yew Fong; Ch'ng, Kean Siang; Thurairatnam, Dharminy

    2015-03-01

    A prospective cross-sectional study was conducted at 3 government hospitals over 6 months to evaluate the confidence level of medical officers (MOs) to perform clinical procedures in nonspecialist government hospitals in Penang. An anonymous self-administered questionnaire in English was designed based on the elective and emergency procedures stated in the houseman training logbook. The questionnaire was distributed to the MOs from Penang State Health Department through the respective hospital directors and returned to Penang State Health Department on completion. The results showed a statistically significant difference between those who had undergone 12 months and those who had undergone 24 months as housemen in performing both elective and emergency procedures. MOs who had spent 24 months as housemen expressed a higher confidence level than those with only 12 months of experience. We also found that the confidence level was significantly influenced by visiting specialists and by working together with cooperative, experienced paramedics. © 2013 APJPH.

  9. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Because the data are sparse, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this purpose, we use both the original catalog, to which no declustering methods were applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required; in this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone at reasonable levels of confidence is almost impossible.

  10. Statistics without Tears: Complex Statistics with Simple Arithmetic

    ERIC Educational Resources Information Center

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…

  11. Using Asymptotic Results to Obtain a Confidence Interval for the Population Median

    ERIC Educational Resources Information Center

    Jamshidian, M.; Khatoonabadi, M.

    2007-01-01

    Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…
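
    To give a flavor of such a result, here is a common distribution-free construction based on order statistics and the normal approximation to the binomial (a textbook sketch with invented data, not necessarily the interval derived in the article):

```python
import math

def median_ci(sorted_data, z=1.96):
    """Approximate 95% CI for the population median via order statistics
    and the normal approximation to the binomial (textbook construction)."""
    n = len(sorted_data)
    half_width = z * math.sqrt(n) / 2
    lo = max(math.floor(n / 2 - half_width), 0)
    hi = min(math.ceil(n / 2 + half_width), n - 1)
    return sorted_data[lo], sorted_data[hi]

data = sorted([7, 3, 9, 4, 6, 8, 5, 10, 2, 6, 7, 5])
print(median_ci(data))  # an interval of order statistics around the median
```

    Because the interval endpoints are observed data values rather than mean ± margin, the construction is robust to outliers, which is exactly the property that motivates teaching the median alongside the mean.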

  12. Confidence Intervals from Realizations of Simulated Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, W.; Ratkiewicz, A.; Ressler, J. J.

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence to the predictions of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the k_eff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
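
    The first technique listed (random realizations of uncertain inputs via Monte-Carlo) can be sketched in a few lines. The model function and uncertainty values below are invented for illustration and have nothing to do with the actual criticality calculation:

```python
import random

random.seed(42)

# Hypothetical model of a quantity that depends on two uncertain inputs;
# the function and the uncertainty values are invented for illustration.
def model(x1, x2):
    return 2.0 * x1 + x2 ** 2

realizations = []
for _ in range(10_000):
    x1 = random.gauss(1.0, 0.05)   # input 1: mean 1.0, std dev 0.05
    x2 = random.gauss(3.0, 0.10)   # input 2: mean 3.0, std dev 0.10
    realizations.append(model(x1, x2))

realizations.sort()
# Empirical 95% confidence interval from the 2.5th and 97.5th percentiles
lower = realizations[int(0.025 * len(realizations))]
upper = realizations[int(0.975 * len(realizations))]
print(f"95% interval on the model output: ({lower:.2f}, {upper:.2f})")
```

    Each realization propagates one random draw of the inputs through the model; the spread of the resulting outputs is what the confidence interval summarizes.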

  13. Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists.

    PubMed

    Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor

    2016-11-01

    The statistical reform movement and the American Psychological Association (APA) defend the use of estimators of the effect size and its confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 years (SD= 9.27). The use of effect-size estimators is becoming generalized, as well as the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, based on the predominance of Cohen’s d and the unadjusted R2/η2, which are not immune to outliers or departure from normality and the violations of statistical assumptions, and the under-reporting of confidence intervals of effect-size statistics. The paper concludes with recommendations for improving statistical practice.

  14. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  15. A General Framework for Power Analysis to Detect the Moderator Effects in Two- and Three-Level Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben

    2016-01-01

    The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…

  16. Nurse leader certification preparation: how are confidence levels impacted?

    PubMed

    Junger, Stacey; Trinkle, Nicole; Hall, Norma

    2016-09-01

    The aim was to examine the effect of a nurse leader certification preparation course on the confidence levels of the participants. Limited literature is available regarding nurse leader development and certifications. Barriers exist related to lack of confidence, high cost, time, and lack of access to a preparation course. Nurse leaders (n = 51) completed a pre- and post-survey addressing confidence levels of participants related to the topics addressed in the nurse leader certification preparation course. There were statistically significant increases in confidence levels related to all course content for the participants. At the time of the study, 31.4% of participants intended to sit for the certification examination, and 5 of the 51 participants successfully sat for and passed the examination. A nurse leader certification preparation course increases confidence levels of the participants and removes barriers, thereby increasing the number of certifications obtained. The health-care climate is increasingly complex and nurse leaders need the expertise to navigate the ever-changing health-care environment. Certification in a specialty, such as leadership, serves as an indicator of a high level of competence in the field. © 2016 John Wiley & Sons Ltd.

  17. Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; Paez, Thomas L.

    2006-01-01

    This paper discusses the Bootstrap Method for specification of vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the expected acoustic, random vibration and shock environments seen on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of Normal Tolerance Limits to derive these test levels based upon the amount of data available, and the probability and confidence levels desired. The Normal Tolerance Limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved through the computation of large numbers of Bootstrap replicates of a data measure of interest and the use of these replicates to derive test levels consistent with the probability and confidence desired. The comparison of the results of these two methods is illustrated via an example utilizing actual spacecraft vibroacoustic data.
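
    The core of the Bootstrap as described here fits in a few lines of Python. The sketch below (with invented measurement values) shows the percentile-interval idea in general terms, not NASA's actual derivation of vibroacoustic test levels:

```python
import random
import statistics

random.seed(0)

# Hypothetical measured levels (dB); values invented for illustration
data = [132.1, 134.5, 131.8, 135.2, 133.0, 134.1, 132.7, 133.9, 131.5, 134.8]

def bootstrap_percentile(data, stat, n_replicates=5000, level=0.95):
    """Distribution-free percentile interval for `stat`, built from
    Bootstrap replicates (resampling the data with replacement)."""
    reps = sorted(
        stat([random.choice(data) for _ in data])
        for _ in range(n_replicates)
    )
    lo = reps[int((1 - level) / 2 * n_replicates)]
    hi = reps[int((1 + level) / 2 * n_replicates)]
    return lo, hi

print(bootstrap_percentile(data, statistics.mean))
```

    Because the interval comes from the empirical distribution of the replicates, no normality assumption is needed, which is the contrast with the Normal Tolerance Limit method that the paper draws.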

  18. Radiologists' confidence in detecting abnormalities on chest images and their subjective judgments of image quality

    NASA Astrophysics Data System (ADS)

    King, Jill L.; Gur, David; Rockette, Howard E.; Curtin, Hugh D.; Obuchowski, Nancy A.; Thaete, F. Leland; Britton, Cynthia A.; Metz, Charles E.

    1991-07-01

    The relationship between subjective judgments of image quality for the performance of specific detection tasks and radiologists' confidence level in arriving at correct diagnoses was investigated in two studies in which 12 readers, using a total of three different display environments, interpreted a series of 300 PA chest images. The modalities used were conventional films, laser-printed films, and high-resolution CRT display of digitized images. For the detection of interstitial disease, nodules, and pneumothoraces, there was no statistically significant correlation (Spearman rho) between subjective ratings of quality and radiologists' confidence in detecting these abnormalities. However, in each study, for all modalities and all readers but one, a small but statistically significant correlation was found between the radiologists' ability to correctly and confidently rule out interstitial disease and their subjective ratings of image quality.

  19. A statistical framework for protein quantitation in bottom-up MS-based proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu

  20. The effect of a workplace violence training program for generalist nurses in the acute hospital setting: A quasi-experimental study.

    PubMed

    Lamont, Scott; Brunero, Scott

    2018-05-19

    Workplace violence prevalence has attracted significant attention within the international nursing literature. Little attention to non-mental health settings and a lack of evaluation rigor have been identified within review literature. To examine the effects of a workplace violence training program in relation to risk assessment and management practices, de-escalation skills, breakaway techniques, and confidence levels, within an acute hospital setting. A quasi-experimental study of nurses using pretest-posttest measurements of educational objectives and confidence levels, with two week follow-up. A 440 bed metropolitan tertiary referral hospital in Sydney, Australia. Nurses working in specialties identified as a 'high risk' for violence. A pre-post-test design was used with participants attending a one day workshop. The workshop evaluation comprised the use of two validated questionnaires: the Continuing Professional Development Reaction questionnaire, and the Confidence in Coping with Patient Aggression Instrument. Descriptive and inferential statistics were calculated. The paired t-test was used to assess the statistical significance of changes in the clinical behaviour intention and confidence scores from pre- to post-intervention. Cohen's d effect sizes were calculated to determine the extent of the significant results. Seventy-eight participants completed both pre- and post-workshop evaluation questionnaires. Statistically significant increases in behaviour intention scores were found in fourteen of the fifteen constructs relating to the three broad workshop objectives, and confidence ratings, with medium to large effect sizes observed in some constructs. A significant increase in overall confidence in coping with patient aggression was also found post-test with large effect size. Positive results were observed from the workplace violence training. 
Training needs to be complemented by a multi-faceted organisational approach which includes governance, quality and review processes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. The assessment of data sources for influenza virologic surveillance in New York State.

    PubMed

    Escuyer, Kay L; Waters, Christine L; Gowie, Donna L; Maxted, Angie M; Farrell, Gregory M; Fuschino, Meghan E; St George, Kirsten

    2017-03-01

    Following the 2013 USA release of the Influenza Virologic Surveillance Right Size Roadmap, the New York State Department of Health (NYSDOH) embarked on an evaluation of data sources for influenza virologic surveillance. The aim was to assess NYS data sources, beyond the data generated by the state public health laboratory (PHL), that could enhance influenza surveillance at the state and national level. Potential sources of laboratory test data for influenza were analyzed for quantity and quality. Computer models, designed to assess sample sizes and the confidence of data for statistical representation of influenza activity, were used to compare PHL test data to results from clinical and commercial laboratories reported between June 8, 2013 and May 31, 2014. Sample sizes tested for influenza at the state PHL were sufficient for situational awareness surveillance with optimal confidence levels only during peak weeks of the influenza season. Influenza data pooled from NYS PHLs and clinical laboratories generated optimal confidence levels for situational awareness throughout the influenza season. For novel influenza virus detection in NYS, combined real-time (rt) RT-PCR data from state and regional PHLs achieved ≥85% confidence during peak influenza activity, and ≥95% confidence for most of the low season and all of the off-season. In NYS, combined data from clinical, commercial, and public health laboratories generated optimal influenza surveillance for situational awareness throughout the season. Statistical confidence for novel virus detection, which relies on PHL data alone, was achieved for most of the year. © 2016 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  2. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2013-01-01

    Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L50, lives.

  3. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. 
The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
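
    The power figures reported here can be reproduced in spirit with a rough normal-approximation power calculation for a two-sided two-sample t test (a textbook approximation using Cohen's conventional small/medium/large effect sizes, not the review's own methodology):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(d, n_per_group, z_alpha=1.96):
    """Normal-approximation power of a two-sided two-sample t test
    for standardized effect size d (a rough textbook approximation)."""
    ncp = d * math.sqrt(n_per_group / 2)  # approximate noncentrality
    return 1 - normal_cdf(z_alpha - ncp)

# Cohen's conventional effect sizes, 64 subjects per group
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, round(approx_power(d, n_per_group=64), 2))
```

    With 64 subjects per group the approximation gives about 80% power for a medium effect but far less for a small one, which mirrors the pattern in the review's median power estimates.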

  4. Correlating Student Knowledge and Confidence Using a Graded Knowledge Survey to Assess Student Learning in a General Microbiology Classroom †

    PubMed Central

    Favazzo, Lacey; Willford, John D.; Watson, Rachel M.

    2014-01-01

    Knowledge surveys are a type of confidence survey in which students rate their confidence in their ability to answer questions rather than answering the questions. These surveys have been discussed as a tool to evaluate student in-class or curriculum-wide learning. However, disagreement exists as to whether confidence is actually an accurate measure of knowledge. With the concomitant goals of assessing content-based learning objectives and addressing this disagreement, we present herein a pretest/posttest knowledge survey study that demonstrates a significant difference in correctness on graded test questions at different levels of reported confidence over a multi-semester timeframe. Questions were organized into Bloom’s taxonomy, allowing the data collected to further provide statistical analyses of strengths and deficits at various levels of Bloom’s reasoning with regard to mean correctness. Collectively, students showed increasing confidence and correctness at all levels of thought but struggled with synthesis-level questions. However, when students were only asked to rate confidence and not answer the accompanying test questions, they reported significantly higher confidence than the control group, which was asked to do both. This indicates that when students do not attempt to answer questions, they have significantly greater confidence in their ability to answer those questions. Additionally, when students rate only confidence without answering the question, resolution across Bloom’s levels of reasoning is lost. Based upon our findings, knowledge surveys can be an effective tool for assessment of both breadth and depth of knowledge, but may require students to answer questions in addition to rating confidence to provide the most accurate data. PMID:25574291

  5. The role of digital tomosynthesis in reducing the number of equivocal breast reportings

    NASA Astrophysics Data System (ADS)

    Alakhras, Maram; Mello-Thoms, Claudia; Rickard, Mary; Bourne, Roger; Brennan, Patrick C.

    2015-03-01

    Purpose: To compare radiologists' confidence in assessing breast cancer using combined digital mammography (DM) and digital breast tomosynthesis (DBT) compared with DM alone, as a function of previous experience with DBT. Materials and Methods: Institutional ethics approval was obtained. Twenty-three experienced breast radiologists reviewed 50 cases in two modes, DM alone and DM+DBT. Twenty-seven cases presented with breast cancer. Each radiologist was asked to detect breast lesions and give a confidence score of 1-5 (1 = Normal, 2 = Benign, 3 = Equivocal, 4 = Suspicious, 5 = Malignant). Radiologists were divided into three sub-groups according to their prior experience with DBT (none, workshop experience, and clinical experience). Confidence scores using DM+DBT were compared with DM alone for all readers combined and for each DBT experience subgroup. Statistical analyses, using GraphPad Prism 5, were carried out using the Wilcoxon signed-rank test with statistical significance set at p < 0.05. Results: Confidence scores were higher for true positive cancer cases using DM+DBT compared with DM alone for all readers (p < 0.0001). Confidence scores for normal cases were lower (indicating greater confidence in the non-cancer diagnosis) with DM+DBT compared with DM alone for all readers (p = 0.018) and for readers with no prior DBT experience (p = 0.035). Conclusion: Addition of DBT to DM increases the confidence level of radiologists in scoring cancer and normal/benign cases. This finding appears to apply across radiologists with varying levels of DBT experience; however, further work involving greater numbers of radiologists is required.

  6. Providing responsive nursing care to new mothers with high and low confidence.

    PubMed

    Mantha, Shannon; Davies, Barbara; Moyer, Alwyn; Crowe, Katherine

    2008-01-01

    To describe new mothers' experiences with family-centered maternity care in relation to their confidence level and to determine how care could have been more responsive to their needs. Using data from a prospective Canadian survey of 596 postpartum women, a subsample of women with low and high confidence (N = 74) was selected. Data were analyzed using descriptive statistics and content analysis. Women with both high and low confidence expressed negative experiences with similar frequency (n = 47/74, 64%). Women wanted more nursing support for breastfeeding and postpartum teaching and education. Women who reported a language other than English or French as their first language were significantly less confident than English- and French-speaking women (p < .05). A multilevel framework about family-centered care is presented for healthcare providers in prenatal, labor and birth, and postpartum care. It is recommended that nurses ask new mothers about their confidence level and give special consideration to cultural background in order to provide supportive care in hospital and community settings.

  7. Impact of a critical care postgraduate certificate course on nurses' self-reported competence and confidence: A quasi-experimental study.

    PubMed

    Baxter, Rebecca; Edvardsson, David

    2018-06-01

    Postgraduate education is said to support the development of nurses' professional competence and confidence, essential to the delivery of safe and effective care. However, there is a shortage of empirical evidence demonstrating an increase in nurses' self-reported confidence and competence on completion of critical care postgraduate certificate-level education. To explore the impact of a critical care postgraduate certificate course on nurses' self-reported competence and confidence. To explore the psychometric properties and performance of the Critical Care Competence and Confidence Questionnaire. A quasi-experimental pre/post-test design. A total population sample of nurses completing a critical care postgraduate certificate course at an Australian university. The Critical Care Competence and Confidence Questionnaire was developed for this study to measure nurses' self-reported competence and confidence at baseline and follow-up. Descriptive and inferential statistics were used to explore sample characteristics and changes between baseline and follow-up. Reliability of the questionnaire was explored using Cronbach's alpha and item-total correlations. There was a statistically significant increase in competence and confidence between baseline and follow-up across all questionnaire domains. Satisfactory reliability estimates were found for the questionnaire. Completion of a critical care postgraduate certificate course significantly increased nurses' perceived competence and confidence. The Critical Care Competence and Confidence Questionnaire was found to be psychometrically sound for measuring nurses' self-reported competence and confidence. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Multiple linear regression analysis

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  9. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
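
    The report's core calculation, a lower confidence bound on the probability of flaw detection from binomial inspection data, can be sketched in a few lines. This is a hedged illustration rather than the report's actual program: it computes the exact one-sided lower confidence bound (the Clopper-Pearson bound) by bisection on the binomial tail, using only the Python standard library; the function names are invented here.

```python
from math import comb

def binom_sf(s, n, p):
    """P(X >= s) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(s, n + 1))

def pod_lower_bound(detections, trials, confidence=0.95, tol=1e-9):
    """One-sided lower confidence bound on the probability of detection
    (exact Clopper-Pearson bound), found by bisection on the binomial tail."""
    if detections == 0:
        return 0.0
    alpha = 1.0 - confidence
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # If >= `detections` successes is still too likely at p = mid,
        # the true bound must be smaller.
        if binom_sf(detections, trials, mid) > alpha:
            hi = mid
        else:
            lo = mid
    return lo

# 29 detections in 29 trials: the familiar "90/95" point in NDE practice
print(round(pod_lower_bound(29, 29, 0.95), 3))
```

    For 29 detections in 29 trials at 95% confidence, the bound comes out near 0.90, which is why 29-of-29 demonstrations are a traditional POD benchmark in NDE reliability work.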

  10. Critical Appraisal Skills Among Canadian Obstetrics and Gynaecology Residents: How Do They Fare?

    PubMed

    Bougie, Olga; Posner, Glenn; Black, Amanda Y

    2015-07-01

    Evidence-based medicine has become the standard of care in clinical practice. In this study, our objectives were to (1) determine the type of epidemiology and/or biostatistical training being given in Canadian obstetrics and gynaecology post-graduate programs, (2) determine obstetrics and gynaecology residents' level of confidence with critical appraisal, and (3) assess knowledge of fundamental biostatistical and epidemiological principles among Canadian obstetrics and gynaecology trainees. During a national standardized in-training examination, all Canadian obstetrics and gynaecology residents were invited to complete an anonymous cross-sectional survey to determine their levels of confidence with critical appraisal. Fifteen critical appraisal questions were integrated into the standardized examination to assess critical appraisal skills objectively. Primary outcomes were the residents' level of confidence interpreting biostatistical results and applying research findings to clinical practice, their desire for more biostatistics/epidemiological training in residency, and their performance on knowledge questions. A total of 301 of 355 residents completed the survey (response rate=84.8%). Most (76.7%) had little/no confidence interpreting research statistics. Confidence was significantly higher in those with increased seniority (OR=1.93), in those who had taken a previous epidemiology/statistics course (OR=2.65), and in those who had prior publications (OR=1.82). Many (68%) had little/no confidence applying research findings to clinical practice. Confidence increased significantly with increasing training year (P<0.001) and with formal epidemiology training during residency (OR=2.01). The mean score of the 355 residents on the knowledge assessment questions was 69.8%. Increasing seniority was associated with improved overall test performance (P=0.02). 
Topics with poorer performance included analytical study methods (9.9%), study design (36.9%), and sample size (42.0%). Most residents (84.4%) wanted more epidemiology teaching. Canadian obstetrics and gynaecology residents may have the biostatistical and epidemiological knowledge to interpret results published in the literature, but lack confidence applying these skills in clinical settings. Most residents want additional training in these areas, and residency programs should include such training in their formal curricula to improve residents' confidence and prepare them for a lifelong practice of evidence-based medicine.

  11. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address the methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. A) Combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications. B) Quantify the impact of these methods on software reliability. C) Demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level. D) Quantify and justify the reliability estimate for systems developed using various methods.

  12. Stress response and communication in surgeons undergoing training in endoscopic management of major vessel hemorrhage: a mixed methods study.

    PubMed

    Jukes, Alistair K; Mascarenhas, Annika; Murphy, Jae; Stepan, Lia; Muñoz, Tamara N; Callejas, Claudio A; Valentine, Rowan; Wormald, P J; Psaltis, Alkis J

    2017-06-01

    Major vessel hemorrhage in endoscopic, endonasal skull-base surgery is a rare but potentially fatal event. Surgical simulation models have been developed to train surgeons in the techniques required to manage this complication. This mixed-methods study aims to quantify the stress responses the model induces, determine how realistic the experience is, and how it changes the confidence levels of surgeons in their ability to deal with major vascular injury in an endoscopic setting. Forty consultant surgeons and surgeons in training underwent training on an endoscopic sheep model of jugular vein and carotid artery injury. Pre-course and post-course questionnaires providing demographics, experience level, confidence, and realism scores were taken, based on a 5-point Likert scale. Objective markers of stress response including blood pressure, heart rate, and salivary alpha-amylase levels were measured. Mean "realism" score assessed posttraining showed the model to be perceived as highly realistic by the participants (score 4.02). Difference in participant self-rated pre-course and post-course confidence levels was significant (p < 0.0001): mean pre-course confidence level 1.66 (95% confidence interval [CI], 1.43 to 1.90); mean post-course confidence level 3.42 (95% CI, 3.19 to 3.65). Differences in subjects' heart rates (HRs) and mean arterial blood pressures (MAPs) were significant between injury models (p = 0.0008, p = 0.0387, respectively). No statistically significant difference in salivary alpha-amylase levels pretraining and posttraining was observed. Results from this study indicate that this highly realistic simulation model provides surgeons with an increased level of confidence in their ability to deal with the rare but potentially catastrophic event of major vessel injury in endoscopic skull-base surgery. © 2017 ARS-AAOA, LLC.

  13. A rational approach to legacy data validation when transitioning between electronic health record systems.

    PubMed

    Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A

    2016-09-01

    The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sampling size for data validation can ensure consistent confidence levels while maximizing efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
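
    The sample-size step this record describes can be sketched with the standard proportion formula plus a finite-population correction. The abstract does not state which formula the authors used; this is one common choice, with function and parameter names invented for illustration.

```python
from math import ceil
from statistics import NormalDist

def validation_sample_size(population, confidence=0.95, error=0.05, p=0.5):
    """Records to sample from a migrated table so the estimated error rate
    is within +/- `error` at the given confidence level.  p = 0.5 is the
    conservative worst case; a finite-population correction is applied."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z critical value
    n0 = z**2 * p * (1 - p) / error**2                  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite-population correction
    return ceil(n)

print(validation_sample_size(1_000_000))  # a very large table
print(validation_sample_size(2_000))      # a small table needs fewer records
```

    At a 95% confidence level with a 5% error limit, roughly 384 records suffice no matter how large the table is, while small tables need proportionally fewer — which is why a statistical approach beats validating a fixed percentage of every table.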

  14. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
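
    The approach this record advocates, a percentile bootstrap interval around an effect size, can be sketched as follows. The data, the replicate count, and the function names are invented for illustration; the effect size here is Cohen's d with a pooled standard deviation.

```python
import random
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

def bootstrap_ci(a, b, stat=cohens_d, reps=5000, level=0.95, seed=1):
    """Percentile bootstrap: resample each group with replacement,
    recompute the statistic, and take the middle `level` of the mass."""
    rng = random.Random(seed)
    boot = sorted(
        stat([rng.choice(a) for _ in a], [rng.choice(b) for _ in b])
        for _ in range(reps)
    )
    cut = int((1 - level) / 2 * reps)
    return boot[cut], boot[reps - 1 - cut]

group1 = [5.1, 6.3, 5.8, 7.0, 6.1, 5.5, 6.8, 6.0]  # invented scores
group2 = [4.2, 5.0, 4.8, 5.6, 4.4, 5.2, 4.9, 4.6]
d = cohens_d(group1, group2)
lo, hi = bootstrap_ci(group1, group2)
print(f"d = {d:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```

    Reporting the interval alongside the point estimate conveys the precision of the effect size, which is exactly the context a bare p-value omits.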

  15. Teach a Confidence Interval for the Median in the First Statistics Course

    ERIC Educational Resources Information Center

    Howington, Eric B.

    2017-01-01

    Few introductory statistics courses consider statistical inference for the median. This article argues in favour of adding a confidence interval for the median to the first statistics course. Several methods suitable for introductory statistics students are identified and briefly reviewed.
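
    One method suitable for an introductory course, the distribution-free order-statistic interval, can be sketched as follows; the sample data and function name are invented for illustration.

```python
from math import comb

def median_ci(data, confidence=0.95):
    """Distribution-free confidence interval for the median via order
    statistics: each observation falls below the true median with
    probability 1/2, so endpoints come from the Binomial(n, 1/2) tail."""
    x = sorted(data)
    n = len(x)
    alpha = 1 - confidence
    # largest k with P(Binom(n, 1/2) <= k - 1) <= alpha / 2
    cum, k = 0.0, 0
    while k < n // 2 and cum + comb(n, k) * 0.5**n <= alpha / 2:
        cum += comb(n, k) * 0.5**n
        k += 1
    if k == 0:
        return x[0], x[-1]  # sample too small for the requested confidence
    return x[k - 1], x[n - k]  # k-th and (n + 1 - k)-th order statistics

sample = [12, 15, 9, 20, 14, 11, 17, 13, 16, 10, 18, 14]
print(median_ci(sample))  # -> (11, 17), actual coverage about 96%
```

    Because the endpoints must fall on order statistics, the actual coverage is at least the nominal level rather than exactly equal to it — a useful discussion point for a first course.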

  16. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and of producing an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The research results are presented as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the Confidence Interval of Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R2 value and a good Confidence Interval of Predicted Value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
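
    The Confidence Interval of Predicted Value the authors emphasize can be sketched for simple linear regression as follows. This is an illustrative sketch, not the authors' procedure: a normal critical value stands in for the exact t quantile (a large-sample approximation), and the trip-production data and names are invented.

```python
from statistics import NormalDist, mean

def mean_response_ci(xs, ys, x0, confidence=0.95):
    """Confidence interval for the predicted mean of y at x0 from a simple
    linear regression.  A normal critical value stands in for the exact
    t quantile, so this is a large-sample approximation."""
    n = len(xs)
    xbar, ybar = mean(xs), mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx  # slope
    a = ybar - b * xbar                                             # intercept
    s = (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)) ** 0.5
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    half = z * s * (1 / n + (x0 - xbar) ** 2 / sxx) ** 0.5
    yhat = a + b * x0
    return yhat - half, yhat + half

# Hypothetical zone data: household size vs. trips produced per day
households = [1, 2, 3, 4, 5, 6]
trips = [2.3, 4.1, 5.8, 8.2, 9.9, 12.1]
lo, hi = mean_response_ci(households, trips, x0=3.5)
print(f"predicted mean trips at 3.5 households: {lo:.2f} to {hi:.2f}")
```

    Note that the interval widens as x0 moves away from the sample mean — a model with a high R2 can still predict poorly outside the range where the sample is concentrated, which is the paper's central point.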

  17. Technique for estimating the magnitude and frequency of floods in Texas.

    DOT National Transportation Integrated Search

    1977-01-01

    Drainage area, slope, and mean annual precipitation were the only factors that were statistically significant at the 95-percent confidence level when the characteristics of the drainage basins were used as independent variables in a multiple-regr...

  18. Adhesive properties and adhesive joints strength of graphite/epoxy composites

    NASA Astrophysics Data System (ADS)

    Rudawska, Anna; Stančeková, Dana; Cubonova, Nadezda; Vitenko, Tetiana; Müller, Miroslav; Valášek, Petr

    2017-05-01

    The article presents the results of experimental research on the adhesive joint strength of graphite/epoxy composites and on the surface free energy of the composite surfaces. Two types of graphite/epoxy composites with different thicknesses, used in aircraft structures, were tested. Single-lap adhesive joints of the epoxy composites were considered. Adhesive properties were described by surface free energy, determined using the Owens-Wendt method. A two-component epoxy adhesive was used to prepare the adhesive joints. A Zwick/Roell 100 strength device was used to determine the shear strength of the adhesive joints of the epoxy composites. The strength test results showed that the highest value was obtained for adhesive joints of the graphite/epoxy composite of smaller material thickness (0.48 mm). Statistical analysis of the results showed statistically significant differences between the strength values at the 0.95 confidence level. The statistical analysis also showed no statistically significant differences in the average values of surface free energy (0.95 confidence level). It was noted that in each of the results the dispersive component of surface free energy was much greater than the polar component.

  19. Investigating the psychological resilience, self-confidence and problem-solving skills of midwife candidates.

    PubMed

    Ertekin Pinar, Sukran; Yildirim, Gulay; Sayin, Neslihan

    2018-05-01

    The high level of psychological resilience, self-confidence and problem-solving skills of midwife candidates plays an important role in increasing the quality of health care and in fulfilling their responsibilities towards patients. This study was conducted to investigate the psychological resilience, self-confidence and problem-solving skills of midwife candidates. It is a convenience descriptive quantitative study. Students who study at a Health Sciences Faculty in Turkey's Central Anatolia Region. Midwife candidates (N = 270). In the collection of data, the Personal Information Form, Psychological Resilience Scale for Adults (PRSA), Self-Confidence Scale (SCS), and Problem Solving Inventory (PSI) were used. There was a moderate negative significant relationship between the Problem Solving Inventory scores and the Psychological Resilience Scale for Adults scores (r = -0.619; p = 0.000), and between the Problem Solving Inventory scores and the Self-Confidence Scale scores (r = -0.524; p = 0.000). There was a moderate positive significant relationship between the Psychological Resilience Scale for Adults scores and the Self-Confidence Scale scores (r = 0.583; p = 0.000). There was a statistically significant difference (p < 0.05) between the Problem Solving Inventory and the Psychological Resilience Scale for Adults scores according to getting support in a difficult situation. As psychological resilience and self-confidence levels increase, problem-solving skills increase; additionally, as self-confidence increases, psychological resilience increases too. Psychological resilience, self-confidence, and problem-solving skills of midwife candidates in their first year of studies are higher than those of students in their fourth year. Self-confidence and psychological resilience were high among midwife candidates aged 17 to 21; self-confidence and problem-solving skills were high among residents of city centers; and psychological resilience was high among those who perceive their monthly income as sufficient.
Psychological resilience and problem-solving skills for midwife candidates who receive social support are also high. The fact that levels of self-confidence, problem-solving skills and psychological resilience of fourth-year students are found to be low presents a situation that should be taken into consideration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Evaluation of a Local Anesthesia Simulation Model with Dental Students as Novice Clinicians.

    PubMed

    Lee, Jessica S; Graham, Roseanna; Bassiur, Jennifer P; Lichtenthal, Richard M

    2015-12-01

    The aim of this study was to evaluate the use of a local anesthesia (LA) simulation model in a facilitated small group setting before dental students administered an inferior alveolar nerve block (IANB) for the first time. For this pilot study, 60 dental students transitioning from preclinical to clinical education were randomly assigned to either an experimental group (N=30) that participated in a small group session using the simulation model or a control group (N=30). After administering local anesthesia for the first time, students in both groups were given questionnaires regarding levels of preparedness and confidence when administering an IANB and level of anesthesia effectiveness and pain when receiving an IANB. Students in the experimental group exhibited a positive difference on all six questions regarding preparedness and confidence when administering LA to another student. One of these six questions ("I was prepared in administering local anesthesia for the first time") showed a statistically significant difference (p<0.05). Students who received LA from students who practiced on the simulation model also experienced fewer post-injection complications one day after receiving the IANB, including a statistically significant reduction in trismus. No statistically significant difference was found in level of effectiveness of the IANB or perceived levels of pain between the two groups. The results of this pilot study suggest that using a local anesthesia simulation model may be beneficial in increasing a dental student's level of comfort prior to administering local anesthesia for the first time.

  1. Exact one-sided confidence limits for the difference between two correlated proportions.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2007-08-15

    We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.

  2. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    PubMed

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  3. The confidence and knowledge of health practitioners when interacting with people with aphasia in a hospital setting.

    PubMed

    Cameron, Ashley; McPhail, Steven; Hudson, Kyla; Fleming, Jennifer; Lethlean, Jennifer; Tan, Ngang Ju; Finch, Emma

    2018-06-01

    The aim of the study was to describe and compare the confidence and knowledge of health professionals (HPs) with and without specialized speech-language training for communicating with people with aphasia (PWA) in a metropolitan hospital setting. Ninety HPs from multidisciplinary teams completed a customized survey to identify their demographic information, knowledge of aphasia, current use of supported conversation strategies and overall communication confidence when interacting with PWA using a 100 mm visual analogue scale (VAS) to rate open-ended questions. Conventional descriptive statistics were used to examine the demographic information. Descriptive statistics and the Mann-Whitney U test were used to analyse VAS confidence rating data. The responses to the open-ended survey questions were grouped into four previously identified key categories. The HPs consisted of 22 (24.4%) participants who were speech-language pathologists and 68 (75.6%) participants from other disciplines (non-speech-language pathology HPs, non-SLP HPs). The non-SLP HPs reported significantly lower confidence levels (U = 159.0, p < 0.001, two-tailed) and identified fewer strategies for communicating effectively with PWA than the trained speech-language pathologists. The non-SLP HPs identified a median of two strategies identified [interquartile range (IQR) 1-3] in contrast to the speech-language pathologists who identified a median of eight strategies (IQR 7-12). These findings suggest that HPs, particularly those without specialized communication education, are likely to benefit from formal training to enhance their confidence, skills and ability to successfully communicate with PWA in their work environment. This may in turn increase the involvement of PWA in their health care decisions. 
Implications for Rehabilitation: Interventions to remediate health professionals' (particularly non-speech-language pathology health professionals') lower levels of confidence and ability to communicate with PWA may ultimately help ensure equal access for PWA, promote informed collaborative decision-making, and foster patient-centred care within the health care setting.

  4. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the Detrended Cross-Correlation Coefficient Difference, a tool to measure the contagion/interdependence effect in time series of size N at different time scales n. For this proposition we analyzed simulated and real time series. The results showed that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and we can define a critical value for this dependency at the 90%, 95%, and 99% confidence levels, as will be shown in this paper.

  5. An Analysis of Operational Suitability for Test and Evaluation of Highly Reliable Systems

    DTIC Science & Technology

    1994-03-04

    Exposition," Journal of the American Statistical Association, 59: 353-375 (June 1964). 17. SYS 229, Test and Evaluation Management Coursebook, School of Systems... in hours, θ is the desired MTBCF in hours, R is the number of critical failures, and α is the P[type-I error] of the χ2 statistic with 2*R+2... design of experiments (DOE) tables and the use of Bayesian statistics to increase the confidence level of the test results that will be obtained from

  6. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  7. Probability of detection of internal voids in structural ceramics using microfocus radiography

    NASA Technical Reports Server (NTRS)

    Baaklini, G. Y.; Roth, D. J.

    1986-01-01

    The reliability of microfocus X-radiography for detecting subsurface voids in structural ceramic test specimens was statistically evaluated. The microfocus system was operated in the projection mode using low X-ray photon energies (20 keV) and a 10 micro m focal spot. The statistics were developed for implanted subsurface voids in green and sintered silicon carbide and silicon nitride test specimens. These statistics were compared with previously-obtained statistics for implanted surface voids in similar specimens. Problems associated with void implantation are discussed. Statistical results are given as probability-of-detection curves at a 95 percent confidence level for voids ranging in size from 20 to 528 micro m in diameter.

  8. Probability of detection of internal voids in structural ceramics using microfocus radiography

    NASA Technical Reports Server (NTRS)

    Baaklini, G. Y.; Roth, D. J.

    1985-01-01

    The reliability of microfocus x-radiography for detecting subsurface voids in structural ceramic test specimens was statistically evaluated. The microfocus system was operated in the projection mode using low X-ray photon energies (20 keV) and a 10 micro m focal spot. The statistics were developed for implanted subsurface voids in green and sintered silicon carbide and silicon nitride test specimens. These statistics were compared with previously-obtained statistics for implanted surface voids in similar specimens. Problems associated with void implantation are discussed. Statistical results are given as probability-of-detection curves at a 95 percent confidence level for voids ranging in size from 20 to 528 micro m in diameter.

  9. Statistical Assessment of a Paired-site Approach for Verification of Carbon and Nitrogen Sequestration on CRP Land

    NASA Astrophysics Data System (ADS)

    Kucharik, C.; Roth, J.

    2002-12-01

    The threat of global climate change has provoked policy-makers to consider plausible strategies to slow the accumulation of greenhouse gases, especially carbon dioxide, in the atmosphere. One such idea involves the sequestration of atmospheric carbon (C) in degraded agricultural soils as part of the Conservation Reserve Program (CRP). While the potential for significant C sequestration in CRP grassland ecosystems has been demonstrated, the paired-site sampling approach traditionally used to quantify soil C changes has not been evaluated with robust statistical analysis. In this study, 14 paired CRP (> 8 years old) and cropland sites in Dane County, Wisconsin (WI) were used to assess whether a paired-site sampling design could detect statistically significant differences (ANOVA) in mean soil organic C and total nitrogen (N) storage. We compared surface (0 to 10 cm) bulk density, and sampled soils (0 to 5, 5 to 10, and 10 to 25 cm) for textural differences and chemical analysis of organic matter (OM), soil organic C (SOC), total N, and pH. The CRP contributed to lowering soil bulk density by 13% (p < 0.0001) and increased SOC and OM storage (kg m-2) by 13 to 17% in the 0 to 5 cm layer (p = 0.1). We tested the statistical power associated with ANOVA for measured soil properties, and calculated minimum detectable differences (MDD). We concluded that 40 to 65 paired sites and soil sampling in 5 cm increments near the surface were needed to achieve an 80% confidence level (α = 0.05; β = 0.20) in soil C and N sequestration rates. Because soil C and total N storage was highly variable among these sites (CVs > 20%), only a 23 to 29% change in existing total organic C and N pools could be reliably detected. While C and N sequestration (247 kg C ha-1 yr-1 and 17 kg N ha-1 yr-1) may be occurring and confined to the surface 5 cm as part of the WI CRP, our sampling design did not statistically support the desired 80% confidence level.
We conclude that usage of statistical power analysis is essential to ensure a high level of confidence in soil C and N sequestration rates that are quantified using paired plots.
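    The minimum detectable difference (MDD) calculation this abstract relies on can be sketched under a normal approximation, with illustrative inputs rather than the study's soil data (sd_diff is an assumed standard deviation of paired differences):

```python
# Sketch: minimum detectable difference for a paired design at
# alpha = 0.05 (two-sided) and 80% power, normal approximation.
from scipy.stats import norm

def mdd(sd_diff, n_pairs, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_beta = norm.ppf(power)            # power requirement
    return (z_alpha + z_beta) * sd_diff / n_pairs ** 0.5

print(round(mdd(sd_diff=1.0, n_pairs=50), 3))
```

    Doubling the number of paired sites shrinks the MDD by a factor of sqrt(2), which is why the authors needed 40 to 65 pairs rather than 14 to resolve a 23 to 29% change.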

  10. A statistical analysis of the impact of advertising signs on road safety.

    PubMed

    Yannis, George; Papadimitriou, Eleonora; Papantoniou, Panagiotis; Voulgari, Chrisoula

    2013-01-01

    This research aims to investigate the impact of advertising signs on road safety. An exhaustive review of international literature was carried out on the effect of advertising signs on driver behaviour and safety. Moreover, a before-and-after statistical analysis with control groups was applied to several road sites with different characteristics in the Athens metropolitan area, in Greece, in order to investigate the correlation between the placement or removal of advertising signs and the related occurrence of road accidents. Road accident data for the 'before' and 'after' periods on the test sites and the control sites were extracted from the database of the Hellenic Statistical Authority, and the selected 'before' and 'after' periods vary from 2.5 to 6 years. The statistical analysis shows no correlation between road accidents and advertising signs at any of the nine sites examined, as the confidence intervals of the estimated safety effects are non-significant at the 95% confidence level. This can be explained by the fact that, in the examined road sites, drivers are overloaded with information (traffic signs, direction signs, shop signage, pedestrians and other vehicles, etc.), so the additional information load from advertising signs may not further distract them.

  11. Developing information fluency in introductory biology students in the context of an investigative laboratory.

    PubMed

    Lindquester, Gary J; Burks, Romi L; Jaslow, Carolyn R

    2005-01-01

    Students of biology must learn the scientific method for generating information in the field. Concurrently, they should learn how information is reported and accessed. We developed a progressive set of exercises for the undergraduate introductory biology laboratory that combine these objectives. Pre- and postassessments of approximately 100 students suggest that increases occurred, some statistically significant, in the number of students using various library-related resources, in the numbers and confidence level of students using various technologies, and in the numbers and confidence levels of students involved in various activities related to the scientific method. Following this course, students should be better prepared for more advanced and independent study.

  12. Developing Information Fluency in Introductory Biology Students in the Context of an Investigative Laboratory

    PubMed Central

    2005-01-01

    Students of biology must learn the scientific method for generating information in the field. Concurrently, they should learn how information is reported and accessed. We developed a progressive set of exercises for the undergraduate introductory biology laboratory that combine these objectives. Pre- and postassessments of approximately 100 students suggest that increases occurred, some statistically significant, in the number of students using various library-related resources, in the numbers and confidence level of students using various technologies, and in the numbers and confidence levels of students involved in various activities related to the scientific method. Following this course, students should be better prepared for more advanced and independent study. PMID:15746979

  13. Evaluation of the 3M™ Molecular Detection Assay (MDA) 2 - Salmonella for the Detection of Salmonella spp. in Select Foods and Environmental Surfaces: Collaborative Study, First Action 2016.01.

    PubMed

    Bird, Patrick; Flannery, Jonathan; Crowley, Erin; Agin, James R; Goins, David; Monteroso, Lisa

    2016-07-01

    The 3M™ Molecular Detection Assay (MDA) 2 - Salmonella uses real-time isothermal technology for the rapid and accurate detection of Salmonella spp. from enriched select food, feed, and food-process environmental samples. The 3M MDA 2 - Salmonella was evaluated in a multilaboratory collaborative study using an unpaired study design. The 3M MDA 2 - Salmonella was compared to the U.S. Food and Drug Administration Bacteriological Analytical Manual Chapter 5 reference method for the detection of Salmonella in creamy peanut butter, and to the U.S. Department of Agriculture, Food Safety and Inspection Service Microbiology Laboratory Guidebook Chapter 4.08 reference method "Isolation and Identification of Salmonella from Meat, Poultry, Pasteurized Egg and Catfish Products and Carcass and Environmental Samples" for the detection of Salmonella in raw ground beef (73% lean). Technicians from 16 laboratories located within the continental United States participated. Each matrix was evaluated at three levels of contamination: an uninoculated control level (0 CFU/test portion), a low inoculum level (0.2-2 CFU/test portion), and a high inoculum level (2-5 CFU/test portion). Statistical analysis was conducted according to the probability of detection (POD) statistical model. Results obtained for the low inoculum level test portions produced a difference in collaborator POD values of 0.03 (95% confidence interval, -0.10 to 0.16) for raw ground beef and 0.06 (95% confidence interval, -0.06 to 0.18) for creamy peanut butter, indicating no statistically significant difference between the candidate and reference methods.
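    The POD comparison reported here reduces to a confidence interval for the difference between two detection proportions; when that interval covers zero, as above, the methods are judged not statistically different. A hedged Wald-interval sketch with made-up counts (not the collaborative study's data):

```python
# Sketch: Wald 95% confidence interval for the difference between
# two detection proportions (dPOD) from an unpaired design.
from scipy.stats import norm

def dpod_ci(x1, n1, x2, n2, conf=0.95):
    p1, p2 = x1 / n1, x2 / n2
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    z = norm.ppf(1 - (1 - conf) / 2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

d, (lo, hi) = dpod_ci(38, 72, 34, 72)  # illustrative counts
print(round(d, 3), round(lo, 3), round(hi, 3))
```

    Here the interval straddles zero, so these hypothetical candidate and reference counts would not be declared different at the 95% level.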

  14. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  15. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  16. ON THE SUBJECT OF HYPOTHESIS TESTING

    PubMed Central

    Ugoni, Antony

    1993-01-01

    In this paper, the definition of a statistical hypothesis is discussed, along with the considerations that need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often quoted confidence interval is given a brief introduction. PMID:17989768

  17. Comparison between Couple Attachment Styles, Stress Coping Styles and Self-Esteem Levels

    ERIC Educational Resources Information Center

    Çolakkadioglu, Oguzhan; Akbas, Turan; Uslu, Sevcan Karabulut

    2017-01-01

    Data were acquired from a total of 422 university students with 216 female and 206 male students via Couple Attachment Scale, Stress Coping Styles Scale and Coopersmith Self-Esteem Inventory. Positive and statistically significant relationships were determined between self-confident approach, optimistic approach and social support approach…

  18. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829

  19. Final year dental students in New Zealand: Self-reported confidence levels prior to BDS graduation.

    PubMed

    Murray, C; Chandler, N

    2016-12-01

    It is expected that the graduating dental student will have acquired the skills and knowledge to confidently treat most circumstances that they may encounter in private practice. The aims of this study were to evaluate final year dental students' self-reported levels of confidence in expected core skills just prior to graduation and to explore their career intentions both directly after graduating as well as in the longer term. After ethical approval was obtained, a survey and participant information sheet was distributed to all final year undergraduate dental students in 2014. Statistical analysis was carried out using SPSS version 22.0 with the alpha value set at 0.05. The response rate was 69% (58/84). The largest group (44.8%) was going to be working in New Zealand private practices, with 34.5% definitely considering specializing. The majority reported high self-confidence levels for sealant restorations (96.6%) and radiography (94.8%), while very few were confident in carrying out soft tissue biopsies (1.8%) or restoring dental implants and treating medical emergencies (10.5%). Some gender differences were found. The general finding was that most NZ graduates perceived themselves to be confident in managing the most fundamental aspects of general practice. Similar to their counterparts around the world, they will benefit from further mentoring and additional exposure to the more complex clinical tasks such as the restoration of implants and soft tissue biopsies.

  20. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.; Vicek, Brian L.

    2007-01-01

    Probabilistic failure analysis is essential when analysis of stress-life (S-N) curves is inconclusive in determining the relative ranking of two or more materials. In 1964, L. Johnson published a methodology for establishing the confidence that two populations of data are different. Simplified algebraic equations for confidence numbers were derived based on the original work of L. Johnson. Using the ratios of mean life, the resultant values of confidence numbers deviated less than one percent from those of Johnson. It is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers. These equations were applied to rotating beam fatigue tests that were conducted on three aluminum alloys at three stress levels each. These alloys were AL 2024, AL 6061, and AL 7075. The results were analyzed and compared using ASTM Standard E739-91 and the Johnson-Weibull analysis. The ASTM method did not statistically distinguish between AL 6061 and AL 7075. Based on the Johnson-Weibull analysis confidence numbers greater than 99 percent, AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median or L(sub 50) lives.

  1. Confidence level in venipuncture and knowledge on causes of in vitro hemolysis among healthcare professionals.

    PubMed

    Milutinović, Dragana; Andrijević, Ilija; Ličina, Milijana; Andrijević, Ljiljana

    2015-01-01

    This study aimed to assess the confidence level of healthcare professionals in venipuncture and their knowledge of the possible causes of in vitro hemolysis. A sample of 94 healthcare professionals (nurses and laboratory technicians) participated in this survey study. A four-section questionnaire was used as the research instrument, comprising general information on the research participants, knowledge of possible causes of in vitro hemolysis due to the type of material used, venipuncture technique and specimen handling, as well as assessment of healthcare professionals' confidence level in their own ability to perform their first and last venipuncture. The average score on the knowledge test was higher in nurses than in laboratory technicians (8.11±1.7 and 7.4±1.5, respectively). The difference in average scores was statistically significant (P=0.035), and a Cohen's d in the range of 0.4 indicates a moderate difference on the knowledge test between the two groups of healthcare workers. Only 11/94 of healthcare professionals recognized that blood sample collection from a cannula with an evacuated tube is the method that contributes most to the occurrence of in vitro hemolysis, whereas most risk factors affecting the occurrence of in vitro hemolysis during venipuncture were recognized. There were no significant differences in mean score on the knowledge test in relation to the confidence level in venipuncture (P=0.551). The confidence level at last venipuncture among both profiles of healthcare staff was very high, but they showed insufficient knowledge about possible factors affecting hemolysis due to the materials used in venipuncture compared with factors due to venipuncture technique and handling of the blood sample.
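    The Cohen's d quoted in this abstract is a pooled-standard-deviation effect size. A short sketch using the abstract's summary statistics; the equal group sizes are an assumption for illustration, since the split of the 94 participants between nurses and laboratory technicians is not given:

```python
# Sketch: Cohen's d with a pooled standard deviation.
# Means/SDs from the abstract (8.11 +/- 1.7 vs 7.4 +/- 1.5);
# group sizes of 47 each are assumed, not stated in the abstract.
def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled

print(round(cohens_d(8.11, 1.7, 47, 7.4, 1.5, 47), 2))
```

    With these inputs d comes out near 0.44, consistent with the abstract's "in the range of 0.4" (conventionally a moderate effect).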

  2. Noise induced hearing loss of forest workers in Turkey.

    PubMed

    Tunay, M; Melemez, K

    2008-09-01

    In this study, a total of 114 workers in 3 different groups in terms of age and work underwent audiometric analysis. In order to determine whether there was a statistically significant difference between the hearing loss levels of the workers included in the study, variance analysis was applied to the data obtained from the evaluation. Correlation and regression analysis were applied in order to determine the relations between hearing loss and age and time of work. As a result of the variance analysis, statistically significant differences were found at 500, 2000 and 4000 Hz frequencies. The most specific difference was observed among chainsaw operators at the 4000 Hz frequency. As a result of the correlation analysis, significant relations were found between time of work and hearing loss at the 0.01 confidence level and between age and hearing loss at the 0.05 confidence level. Forest workers using chainsaws should be informed of the risk, should wear protective equipment, quieter chainsaws should be used where possible, and workers should undergo audiometric tests when they start work and once a year.
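    The correlation tests reported here (significance at the 0.01 and 0.05 levels) can be sketched with synthetic data; `pearsonr` returns both the coefficient and its two-sided p-value:

```python
# Sketch: testing a correlation for significance, in the spirit of
# the time-of-work vs hearing-loss analysis. Data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
years_worked = rng.uniform(1, 30, size=60)
hearing_loss = 0.8 * years_worked + rng.normal(0, 5, size=60)  # dB, made up

r, p = pearsonr(years_worked, hearing_loss)
print(r > 0, p < 0.01)  # significant at the 0.01 level?
```

    A correlation "significant at the 0.01 level" simply means p < 0.01 for the test that the true correlation is zero.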

  3. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
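    The bootstrap-resampling intervals the article computes in R can be sketched in Python as a percentile bootstrap, shown here for the median with synthetic data:

```python
# Sketch: percentile bootstrap 95% CI for the median.
# The exponential sample is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=100)

boots = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

    The percentile method needs no distributional assumptions, which is what makes it attractive for statistics (medians, correlations, effect sizes) whose sampling distributions are awkward analytically.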

  4. Archival Legacy Investigations of Circumstellar Environments (ALICE): Statistical assessment of point source detections

    NASA Astrophysics Data System (ADS)

    Choquet, Élodie; Pueyo, Laurent; Soummer, Rémi; Perrin, Marshall D.; Hagan, J. Brendan; Gofas-Salas, Elena; Rajan, Abhijith; Aguilar, Jonathan

    2015-09-01

    The ALICE program, for Archival Legacy Investigations of Circumstellar Environments, is currently conducting a virtual survey of about 400 stars, by re-analyzing the HST-NICMOS coronagraphic archive with advanced post-processing techniques. We present here the strategy that we adopted to identify detections and potential candidates for follow-up observations, and we give a preliminary overview of our detections. We present a statistical analysis conducted to evaluate the confidence level of these detections and the completeness of our candidate search.

  5. Counting Better? An Examination of the Impact of Quantitative Method Teaching on Statistical Anxiety and Confidence

    ERIC Educational Resources Information Center

    Chamberlain, John Martyn; Hillier, John; Signoretta, Paola

    2015-01-01

    This article reports the results of research concerned with students' statistical anxiety and confidence to both complete and learn to complete statistical tasks. Data were collected at the beginning and end of a quantitative method statistics module. Students recognised the value of numeracy skills but felt they were not necessarily relevant for…

  6. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper. PMID:25878958

  7. 76 FR 9696 - Equipment Price Forecasting in Energy Conservation Standards Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... for particular efficiency design options, an empirical experience curve fit to the available data may be used to forecast future costs of such design option technologies. If a statistical evaluation indicates a low level of confidence in estimates of the design option cost trend, this method should not be...

  8. A Statistical Method for Synthesizing Mediation Analyses Using the Product of Coefficient Approach Across Multiple Trials

    PubMed Central

    Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks

    2016-01-01

    Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330

  9. The effects of pediatric community simulation experience on the self-confidence and satisfaction of baccalaureate nursing students: A quasi-experimental study.

    PubMed

    Lubbers, Jaclynn; Rossman, Carol

    2016-04-01

    Simulation in nursing education is a means to transform student learning and respond to decreasing clinical site availability. This study proposed an innovative simulation experience where students completed community based clinical hours with simulation scenarios. The purpose of this study was to determine the effects of a pediatric community simulation experience on the self-confidence of nursing students. Bandura's (1977) Self-Efficacy Theory and Jeffries' (2005) Nursing Education Simulation Framework were used. This quasi-experimental study collected data using a pre-test and posttest tool. The setting was a private, liberal arts college in the Midwestern United States. Fifty-four baccalaureate nursing students in a convenience sample were the population of interest. The sample was predominantly female with very little exposure to simulation prior to this study. The participants completed a 16-item self-confidence instrument developed for this study which measured students' self-confidence in pediatric community nursing knowledge, skill, communication, and documentation. The overall study showed statistically significant results (t=20.70, p<0.001) and statistically significant results within each of the eight 4-item sub-scales (p<0.001). Students also reported a high level of satisfaction with their simulation experience. The data demonstrate that students who took the Pediatric Community Based Simulation course reported higher self-confidence after the course than before the course. Higher self-confidence scores for simulation participants have been shown to increase quality of care for patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
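    The MOVER construction described in this abstract combines the separate confidence limits for each proportion into limits for their difference. A sketch using Wilson score limits for the individual proportions, with illustrative counts rather than the RDS study's data:

```python
# Sketch: MOVER (method of variance estimates recovery) limits for a
# risk difference p1 - p2, built from Wilson score limits.
from scipy.stats import norm

def wilson(x, n, conf=0.95):
    """Wilson score interval for a single proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / (1 + z**2 / n)
    return centre - half, centre + half

def mover_diff(x1, n1, x2, n2, conf=0.95):
    """MOVER limits for p1 - p2 from the individual Wilson limits."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, conf)
    l2, u2 = wilson(x2, n2, conf)
    d = p1 - p2
    lower = d - ((p1 - l1) ** 2 + (u2 - p2) ** 2) ** 0.5
    upper = d + ((u1 - p1) ** 2 + (p2 - l2) ** 2) ** 0.5
    return lower, upper

print(tuple(round(v, 3) for v in mover_diff(40, 100, 25, 100)))
```

    As the abstract notes, the appeal of MOVER is that only each proportion and its own confidence limits are needed, so it can be applied to estimates (such as RDS-weighted proportions) reported in previously published work.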

  11. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare between the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼ 2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼ 1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ, and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217×217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at ∼1 to 2σ confidence level. Results of our analysis based on the new approach we propose in this work are in agreement with other analyses done using alternative methods.

  12. Arctic Sea Ice Variability and Trends, 1979-2006

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; Cavalieri, Donald J.

    2008-01-01

    Analysis of Arctic sea ice extents derived from satellite passive-microwave data for the 28 years, 1979-2006 yields an overall negative trend of -45,100 +/- 4,600 km2/yr (-3.7 +/- 0.4%/decade) in the yearly averages, with negative ice-extent trends also occurring for each of the four seasons and each of the 12 months. For the yearly averages the largest decreases occur in the Kara and Barents Seas and the Arctic Ocean, with linear least squares slopes of -10,600 +/- 2,800 km2/yr (-7.4 +/- 2.0%/decade) and -10,100 +/- 2,200 km2/yr (-1.5 +/- 0.3%/decade), respectively, followed by Baffin Bay/Labrador Sea, with a slope of -8,000 +/- 2,000 km2/yr (-9.0 +/- 2.3%/decade), the Greenland Sea, with a slope of -7,000 +/- 1,400 km2/yr (-9.3 +/- 1.9%/decade), and Hudson Bay, with a slope of -4,500 +/- 900 km2/yr (-5.3 +/- 1.1%/decade). These are all statistically significant decreases at a 99% confidence level. The Seas of Okhotsk and Japan also have a statistically significant ice decrease, although at a 95% confidence level, and the three remaining regions, the Bering Sea, Canadian Archipelago, and Gulf of St. Lawrence, have negative slopes that are not statistically significant. The 28-year trends in ice areas for the Northern Hemisphere total are also statistically significant and negative in each season, each month, and for the yearly averages.
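    The trend-significance statements in this abstract come from linear least-squares slopes tested against zero. A sketch with synthetic yearly extents (not the satellite record; the slope and noise level below are assumed for illustration):

```python
# Sketch: least-squares trend and its significance for a yearly
# series, as in testing a sea-ice extent slope at 99% confidence.
# The series is synthetic: a -45,100 km2/yr trend plus noise.
import numpy as np
from scipy.stats import linregress

years = np.arange(1979, 2007)  # 28 years, as in the study period
rng = np.random.default_rng(1)
extent = 12.5e6 - 45_100 * (years - 1979) + rng.normal(0, 150_000, years.size)

res = linregress(years, extent)
print(res.slope < 0, res.pvalue < 0.01)  # negative trend at 99% confidence?
```

    "Significant at a 99% confidence level" corresponds to the slope's two-sided p-value falling below 0.01.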

  13. Work-related stress, education and work ability among hospital nurses.

    PubMed

    Golubic, Rajna; Milosevic, Milan; Knezevic, Bojana; Mustajbegovic, Jadranka

    2009-10-01

    This paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability. Nurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability. A cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire. We identified six major groups of occupational stressors: 'Organization of work and financial issues', 'Public criticism', 'Hazards at workplace', 'Interpersonal conflicts at workplace', 'Shift work' and 'Professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses with a college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 1.22-2.36), lower educational level (odds ratio = 1.69, 95% confidence interval 1.22-2.36) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09). Hospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.

  14. Evaluating the use of simulation with beginning nursing students.

    PubMed

    Alfes, Celeste M

    2011-02-01

    The purpose of this quasi-experimental study was to evaluate and compare the effectiveness of simulation versus a traditional skills laboratory method in promoting self-confidence and satisfaction with learning among beginning nursing students. A single convenience sample of 63 first-semester baccalaureate nursing students learning effective comfort care measures was recruited to compare the two teaching methods. Students participating in the simulation experience were statistically significantly more confident than students in the traditional group. There was a slight, nonsignificant difference in satisfaction with learning between the two groups. Bivariate analysis revealed a significant positive relationship between self-confidence and satisfaction. Students in both groups reported higher levels of self-confidence following the learning experiences. Findings may influence the development of simulation experiences for beginning nursing students and encourage the implementation of simulation as a strand from beginning to end in nursing curricula. Copyright 2011, SLACK Incorporated.

  15. Harnessing Multivariate Statistics for Ellipsoidal Data in Structural Geology

    NASA Astrophysics Data System (ADS)

    Roberts, N.; Davis, J. R.; Titus, S.; Tikoff, B.

    2015-12-01

    Most structural geology articles do not state significance levels, report confidence intervals, or perform regressions to find trends. This is, in part, because structural data tend to include directions, orientations, ellipsoids, and tensors, which are not treatable by elementary statistics. We describe a full procedural methodology for the statistical treatment of ellipsoidal data. We use a reconstructed dataset of deformed ooids in Maryland from Cloos (1947) to illustrate the process. Normalized ellipsoids have five degrees of freedom and can be represented by a second-order tensor. This tensor can be permuted into a five-dimensional vector that belongs to a vector space and can be treated with standard multivariate statistics. Cloos made several claims about the distribution of deformation in the South Mountain fold, Maryland, and we reexamine two particular claims using hypothesis testing: 1) octahedral shear strain increases towards the axial plane of the fold; 2) finite strain orientation varies systematically along the trend of the axial trace as it bends with the Appalachian orogen. We then test the null hypothesis that the southern segment of South Mountain is the same as the northern segment. This test illustrates the application of ellipsoidal statistics, which combine both orientation and shape. We report confidence intervals for each test, and graphically display our results with novel plots. This poster illustrates the importance of statistics in structural geology, especially when working with noisy or small datasets.
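    The tensor-to-vector step can be sketched numerically. The coordinate choice below is one of several possible orthonormal parameterizations of symmetric traceless matrices; the function name and basis are illustrative, not necessarily the authors' convention. It maps the matrix logarithm of a normalized (det = 1) ellipsoid tensor to a vector in R^5 while preserving the Frobenius norm, after which ordinary multivariate statistics apply.

```python
import numpy as np
from scipy.linalg import logm

def ellipsoid_to_vector(E):
    """Map a normalised (det = 1) ellipsoid tensor E to a 5-vector.

    log(E) is symmetric and traceless, so it has exactly five
    independent entries; the scaling below keeps the Euclidean norm
    of the vector equal to the Frobenius norm of log(E).
    """
    L = logm(E).real
    # Two diagonal degrees of freedom (the trace constraint removes
    # the third) plus the three off-diagonal entries.
    d = np.array([(L[0, 0] - L[1, 1]) / np.sqrt(2),
                  (L[0, 0] + L[1, 1] - 2 * L[2, 2]) / np.sqrt(6)])
    off = np.sqrt(2) * np.array([L[0, 1], L[0, 2], L[1, 2]])
    return np.concatenate([d, off])
```

    Once each measured ellipsoid is a 5-vector, sample means, covariance matrices, and Hotelling-type two-sample tests (e.g. north vs. south segment) can be computed with standard tools.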

  16. Evaluation of PCR Systems for Field Screening of Bacillus anthracis

    PubMed Central

    Ozanich, Richard M.; Colburn, Heather A.; Victry, Kristin D.; Bartholomew, Rachel A.; Arce, Jennifer S.; Heredia-Langner, Alejandro; Jarman, Kristin; Kreuzer, Helen W.

    2017-01-01

    There is little published data on the performance of hand-portable polymerase chain reaction (PCR) systems that can be used by first responders to determine if a suspicious powder contains a potential biothreat agent. We evaluated 5 commercially available hand-portable PCR instruments for detection of Bacillus anthracis. We used a cost-effective, statistically based test plan to evaluate systems at performance levels ranging from 0.85-0.95 lower confidence bound (LCB) of the probability of detection (POD) at confidence levels of 80% to 95%. We assessed specificity using purified genomic DNA from 13 B. anthracis strains and 18 Bacillus near neighbors, potential interference with 22 suspicious powders that are commonly encountered in the field by first responders during suspected biothreat incidents, and the potential for PCR inhibition when B. anthracis spores were spiked into these powders. Our results indicate that 3 of the 5 systems achieved 0.95 LCB of the probability of detection with 95% confidence levels at test concentrations of 2,000 genome equivalents/mL (GE/mL), which is comparable to 2,000 spores/mL. This is more than sufficient sensitivity for screening visible suspicious powders. These systems exhibited no false-positive results or PCR inhibition with common suspicious powders and reliably detected B. anthracis spores spiked into these powders, though some issues with assay controls were observed. Our testing approach enables efficient performance testing using a statistically rigorous and cost-effective test plan to generate performance data that allow users to make informed decisions regarding the purchase and use of field biodetection equipment. PMID:28192050
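    The lower confidence bound (LCB) on the probability of detection used above can be obtained from an exact binomial calculation. A minimal sketch, assuming x detections in n trials and the standard Clopper-Pearson construction (the study's actual test plan may differ in detail):

```python
from scipy.stats import beta

def pod_lower_bound(successes: int, trials: int, confidence: float = 0.95) -> float:
    """Exact (Clopper-Pearson) one-sided lower confidence bound on the
    probability of detection, from `successes` detections in `trials` trials."""
    if successes == 0:
        return 0.0
    # The lower bound is the Beta(x, n - x + 1) quantile at 1 - confidence.
    return beta.ppf(1.0 - confidence, successes, trials - successes + 1)

# For example: how many all-positive trials are needed for the 95% LCB of
# POD to reach 0.95?  59 detections in 59 trials suffice:
print(pod_lower_bound(59, 59))  # ≈ 0.9505
```

    With this bound, 59 consecutive detections with no misses are sufficient to demonstrate a 0.95 LCB of POD at 95% confidence, which illustrates why such test plans can be both rigorous and cost-effective.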

  17. Confidence level in venipuncture and knowledge on causes of in vitro hemolysis among healthcare professionals

    PubMed Central

    Milutinović, Dragana; Andrijević, Ilija; Ličina, Milijana; Andrijević, Ljiljana

    2015-01-01

    Introduction This study aimed to assess the confidence level of healthcare professionals in venipuncture and their knowledge of the possible causes of in vitro hemolysis. Materials and methods A sample of 94 healthcare professionals (nurses and laboratory technicians) participated in this survey study. A four-section questionnaire was used as the research instrument, comprising general information about the participants, knowledge of possible causes of in vitro hemolysis due to the type of material used and to venipuncture technique and specimen handling, and an assessment of healthcare professionals’ confidence in their own ability to perform their first and last venipuncture. Results The average score on the knowledge test was higher in nurses than in laboratory technicians (8.11 ± 1.7 and 7.4 ± 1.5, respectively). The difference in average scores was statistically significant (P = 0.035), and a Cohen’s d of about 0.4 indicates a moderate difference on the knowledge test between the two groups of healthcare workers. Only 11/94 of healthcare professionals recognized that blood sample collection from a cannula and evacuated tube is the method that contributes most to the occurrence of in vitro hemolysis, whereas most risk factors affecting the occurrence of in vitro hemolysis during venipuncture were recognized. There were no significant differences in mean score on the knowledge test in relation to the confidence level in venipuncture (P = 0.551). Conclusion The confidence level at the last venipuncture among both profiles of healthcare staff was very high, but they showed insufficient knowledge about possible factors affecting hemolysis due to materials used in venipuncture compared with factors due to venipuncture technique and handling of the blood sample. PMID:26527124

  18. Level of confidence in venepuncture and knowledge in determining causes of blood sample haemolysis among clinical staff and phlebotomists.

    PubMed

    Makhumula-Nkhoma, Nellie; Whittaker, Vicki; McSherry, Robert

    2015-02-01

    To investigate the association between confidence level in venepuncture and knowledge in determining causes of blood sample haemolysis among clinical staff and phlebotomists. Various collection methods are used to perform venepuncture, also called phlebotomy, the act of drawing blood from a patient using a needle. The collection method used has an impact on preanalytical blood sample haemolysis. Haemolysis is the breakdown of red blood cells, which makes the sample unsuitable. Despite available evidence on the common causes, an extensive literature search showed a lack of published evidence on the association of haemolysis with staff confidence and knowledge. A quantitative primary research design using the survey method. A purposive sample of 290 clinical staff and phlebotomists conducting venepuncture in one North England hospital participated in this quantitative survey. A three-section web-based questionnaire comprising demographic profile, confidence and competence levels, and knowledge sections was used to collect data in 2012. The chi-squared test for independence was used to compare the distribution of responses for categorical data. ANOVA was used to determine the mean difference in the knowledge scores of staff with different confidence levels. Almost 25% of clinical staff and phlebotomists participated in the survey. There was an increase in confidence at the last venepuncture among staff of all categories. While doctors' scores were higher than healthcare assistants' (p ≤ 0.001), nurses' scores spanned a wide range and were the lowest. There was no statistically significant difference (at the 5% level) in total knowledge scores by confidence level at the last venepuncture, F(2, 4.690) = 1.67, p = 0.31, among staff of all categories. Evidence-based measures are required to boost staff knowledge of preanalytical blood sample haemolysis for a standardised, quality service.
Monitoring and evaluation of the training, and conducting and monitoring the haemolysis rate, are equally crucial. Although the hospital is succeeding in providing regular training in venepuncture, this is only one aspect of quality; the process and outcomes also need interventions. © 2014 John Wiley & Sons Ltd.

  19. Quantitative skills as a graduate learning outcome of university science degree programmes: student performance explored through the planned-enacted-experienced curriculum model

    NASA Astrophysics Data System (ADS)

    Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn

    2016-07-01

    Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result, which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results showed, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.

  20. Perceived preparedness for physiatric specialization and future career goals of graduating postgraduate year IV residents during the 2004-2005 academic year.

    PubMed

    Raj, Vishwa S; Rintala, Diana H

    2007-12-01

    The purpose of this study was to evaluate trends among postgraduate year (PGY) IV physiatry residents, at the time of graduation from residency, in terms of their perceived experiences in the core clinical areas, confidence with procedural subspecialization, choice in career specialization, and desire to pursue clinical fellowship. Surveys were distributed to 386 PGY IV residents in physiatry at the end of the 2004-2005 academic year. Ninety-three residents (24%) completed responses in a confidential manner. Residents who were generally more confident in core clinical areas, as defined by the Self-Assessment Examination, and specialty prescription writing also believed themselves to be more prepared to practice these topics in their careers. Overall levels of confidence and perceived preparedness correlated positively with months of training and negatively with the belief in the need for postresidency fellowship training to incorporate these areas into clinical practice. Positive correlations also existed among perceived levels of preparedness in performing various physiatric procedures. Statistically significant differences in levels of confidence and preparedness existed among geographic regions when evaluating core physiatric subject matter. Fifty-six percent of residents who responded planned to pursue fellowship training, and a majority of residents intended to perform interventional procedures and musculoskeletal medicine in their practices. These results provide insight into how trainees perceive their current clinical education. With validation of measures for confidence and preparedness, this survey may be useful as an adjunct resource for residency programs to evaluate their trainees.

  1. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95% confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5%. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95% confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95% confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4% split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
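    The interval construction being shortened can be illustrated with the Q-profile method, one common member of the generalised-heterogeneity-statistic family. A sketch under simplifying assumptions (within-study variances v treated as known, the generalised Q statistic inverted against chi-squared quantiles; the search ceiling and function names are illustrative):

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

def gen_q(tau2, y, v):
    """Generalised Q statistic at candidate between-study variance tau2."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def tau2_ci(y, v, lower_tail=0.01, upper_tail=0.04):
    """Q-profile confidence interval for tau^2 with (possibly unequal)
    tail probabilities; total coverage is 1 - lower_tail - upper_tail."""
    k = len(y)
    hi = 100.0 * max(v)  # assumed search ceiling for the root finder

    def bound(q_crit):
        # Q is decreasing in tau2; a bound is where Q crosses q_crit.
        if gen_q(0.0, y, v) <= q_crit:  # interval truncated at zero
            return 0.0
        return brentq(lambda t2: gen_q(t2, y, v) - q_crit, 0.0, hi)

    lower = bound(chi2.ppf(1 - lower_tail, k - 1))
    upper = bound(chi2.ppf(upper_tail, k - 1))
    return lower, upper
```

    Passing lower_tail=0.01, upper_tail=0.04 reproduces the '1-4% split' idea, while lower_tail=upper_tail=0.025 gives the conventional equal-tailed 95% interval; because Q decreases in tau^2, the larger upper-tail probability pulls the upper bound downward.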

  2. Nigerian pharmacists’ self-perceived competence and confidence to plan and conduct pharmacy practice research

    PubMed Central

    Usman, Mohammad N.; Umar, Muhammad D.

    2018-01-01

    Background: Recent studies have revealed that pharmacists have interest in conducting research. However, lack of confidence is a major barrier. Objective: This study evaluated pharmacists’ self-perceived competence and confidence to plan and conduct health-related research. Method: This cross-sectional study was conducted during the 89th Annual National Conference of the Pharmaceutical Society of Nigeria in November 2016. An adapted questionnaire was validated and administered to 200 pharmacist delegates during the conference. Result: Overall, 127 questionnaires were included in the analysis. At least 80% of the pharmacists had previous health-related research experience. Pharmacists’ competence and confidence scores were lowest for research skills such as using software for statistical analysis, choosing and applying appropriate inferential statistical tests and methods, and outlining the detailed statistical plan to be used in data analysis. The highest competence and confidence scores were observed for conception of the research idea, literature search, and critical appraisal of literature. Pharmacists with previous research experience had higher competence and confidence scores than those with no previous research experience (p<0.05). The only predictor of moderate-to-extreme self-competence and confidence was having at least one journal article publication during the last 5 years. Conclusion: Nigerian pharmacists indicated interest in participating in health-related research. However, self-competence and confidence to plan and conduct research were low. This was particularly so for skills related to statistical analysis. Training programs and the building of a Pharmacy Practice Research Network are recommended to enhance pharmacists’ research capacity. PMID:29619141

  3. Reducing "Math Anxiety" in College Algebra Courses Including Comparisons with Elementary Statistics Courses.

    ERIC Educational Resources Information Center

    Bankhead, Mike

    The high levels of anxiety, apprehension, and apathy of students in college algebra courses caused the instructor to create and test a variety of math teaching techniques designed to boost student confidence and enthusiasm in the subject. Overall, this proposal covers several different techniques, which have been evaluated by both students and the…

  4. A criterion for establishing life limits. [for Space Shuttle Main Engine service

    NASA Technical Reports Server (NTRS)

    Skopp, G. H.; Porter, A. A.

    1990-01-01

    The development of a rigorous statistical method that would utilize hardware-demonstrated reliability to evaluate hardware capability and provide ground rules for safe flight margin is discussed. A statistical-based method using the Weibull/Weibayes cumulative distribution function is described. Its advantages and inadequacies are pointed out. Another, more advanced procedure, Single Flight Reliability (SFR), determines a life limit which ensures that the reliability of any single flight is never less than a stipulated value at a stipulated confidence level. Application of the SFR method is illustrated.
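    The Weibull/Weibayes idea can be sketched as follows, assuming a known shape parameter and a zero-failure (success) test. This is the classic Weibayes construction from the reliability literature, not the SFR procedure itself, which the paper develops further; the function names are illustrative.

```python
import math

def weibayes_eta_lower(times, beta_shape, confidence=0.90):
    """Lower confidence bound on the Weibull characteristic life (eta)
    from a zero-failure test with the shape parameter assumed known.
    At confidence C the bound is (sum t_i^beta / -ln(1 - C))^(1/beta)."""
    total = sum(t ** beta_shape for t in times)
    return (total / -math.log(1.0 - confidence)) ** (1.0 / beta_shape)

def single_flight_reliability(t, eta, beta_shape):
    """Weibull reliability at life t: R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta_shape))

# Illustrative: ten units each run 100 cycles failure-free, beta = 2.
eta_lower = weibayes_eta_lower([100.0] * 10, 2.0, confidence=0.90)
```

    A life limit in the SFR spirit would then be the largest t for which single_flight_reliability(t, eta_lower, beta_shape) stays at or above the stipulated reliability, so that no single flight falls below that value at the stated confidence.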

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Brooks, Dusty Marie

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
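    The bootstrap idea can be sketched with a simplified, fully nonparametric pointwise version. The report's method is semi-parametric and also produces tolerance bounds; the toy profiles below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_bounds(profiles, n_boot=2000, level=0.95):
    """Pointwise percentile-bootstrap confidence bounds for the mean of a
    set of functional observations (rows = profiles, cols = positions)."""
    n = profiles.shape[0]
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_means = profiles[idx].mean(axis=1)  # shape (n_boot, n_positions)
    alpha = 1.0 - level
    lower = np.quantile(boot_means, alpha / 2, axis=0)
    upper = np.quantile(boot_means, 1 - alpha / 2, axis=0)
    return lower, upper

# Toy data: 12 noisy "stress profiles" sampled at 50 through-wall depths.
depth = np.linspace(0, 1, 50)
profiles = np.sin(2 * np.pi * depth) + rng.normal(0, 0.2, size=(12, 50))
lo, hi = bootstrap_mean_bounds(profiles)
```

    The same resampling applied to pointwise differences between measured and predicted profiles gives bounds on the average measurement-prediction discrepancy, which is the comparison the report uses to judge agreement.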

  6. Evidence-Based Practice Knowledge, Attitude, Access and Confidence: A comparison of dental hygiene and dental students.

    PubMed

    Santiago, Victoria; Cardenas, Melissa; Charles, Anne Laure; Hernandez, Estefany; Oyoyo, Udochukwu; Kwon, So Ran

    2018-04-01

    Purpose: The purpose of this study was to evaluate whether current educational strategies at a dental institution in the United States made a difference in dental hygiene (DNHY) and dental students' (D3) learning outcomes in the four domains of evidence-based practice (EBP): knowledge, attitude, accessing evidence, and confidence (KACE), following a 12-week research design course. Methods: All DNHY (n=19) and D3 (n=96) students enrolled in the research design course at Loma Linda University completed a paper KACE survey distributed on the first day of class. Students completed the KACE survey once more at the end of the 12-week course. Pre- and post-survey results were compared both within and between the DNHY and D3 student groups to identify the learning outcomes in the four domains of EBP: knowledge, attitude, accessing evidence, and confidence in EBP. Descriptive statistics were conducted to profile all variables in the study; the level of significance was set at α=0.05. Results: All DNHY students (n=19) completed the pre- and post-KACE surveys; of the D3 students (n=96) enrolled in the course, 82% (n=79) completed the post-survey. Comparison of the survey results showed that both DNHY and D3 students demonstrated statistically significant increases in their levels of knowledge and attitude (p < 0.05) towards EBP. In the attitude domain, DNHY students indicated more positive attitudes towards EBP (p < 0.001) than their D3 student cohorts. Neither group demonstrated significant changes in confidence in applying EBP (p > 0.05). Conclusion: DNHY and D3 students increased their knowledge and developed more positive attitudes towards EBP following a 12-week research design course. Study results identify areas for improvement in EBP knowledge acquisition, including determining levels of evidence, analyzing study results, and evaluating the appropriateness of research study designs, through the use of a validated EBP survey instrument.
Copyright © 2018 The American Dental Hygienists’ Association.

  7. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
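    Method (1), adjusting the observed pilot SD up to a confidence limit before the power calculation, can be sketched as follows. The numbers are illustrative, and the normal-approximation sample-size formula stands in for whatever power calculation the trialist actually uses.

```python
from math import sqrt, ceil
from scipy.stats import chi2, norm

def sd_upper_bound(s, n_pilot, confidence=0.80):
    """One-sided upper confidence limit for the population SD, given a
    pilot-sample SD s with n_pilot - 1 degrees of freedom."""
    df = n_pilot - 1
    return s * sqrt(df / chi2.ppf(1.0 - confidence, df))

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-arm RCT
    detecting a mean difference delta."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

# Illustrative pilot: SD = 10 from n = 30, target difference = 5.
s_pilot, n_pilot = 10.0, 30
n_naive = n_per_arm(s_pilot, 5.0)                       # uses observed SD
n_adjusted = n_per_arm(sd_upper_bound(s_pilot, n_pilot), 5.0)  # inflated SD
```

    The gap between n_naive and n_adjusted is the price of guarding against the pilot SD underestimating the true SD; a larger pilot shrinks that gap, which is the trade-off behind the "efficient" pilot size of method (4).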

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fresquez, Philip R.

    Field mice are effective indicators of contaminant presence. This paper reports the concentrations of various radionuclides, heavy metals, polychlorinated biphenyls, high explosives, perchlorate, and dioxin/furans in field mice (mostly deer mice) collected from regional background areas in northern New Mexico. These data, represented as the regional statistical reference level (the mean plus three standard deviations = 99% confidence level), are used to compare with data from field mice collected from areas potentially impacted by Laboratory operations, as per the Environmental Surveillance Program at Los Alamos National Laboratory.

  9. On the Statistical Analysis of X-ray Polarization Measurements

    NASA Technical Reports Server (NTRS)

    Strohmayer, T. E.; Kallman, T. R.

    2013-01-01

    In many polarimetry applications, including observations in the X-ray band, the measurement of a polarization signal can be reduced to the detection and quantification of a deviation from uniformity of a distribution of measured angles of the form α + β cos²(φ − φ₀) (0 < φ < π). We explore the statistics of such polarization measurements using both Monte Carlo simulations and analytic calculations based on the appropriate probability distributions. We derive relations for the number of counts required to reach a given detection level (parameterized by β, the "number of sigmas" of the measurement) appropriate for measuring the modulation amplitude α by itself (single interesting parameter case) or jointly with the position angle φ₀ (two interesting parameters case). We show that for the former case, when the intrinsic amplitude is equal to the well-known minimum detectable polarization (MDP), it is, on average, detected at the 3σ level. For the latter case, when one requires a joint measurement at the same confidence level, more counts are needed, by a factor of approximately 2.2, than are required to achieve the MDP level. We find that the position angle uncertainty at 1σ confidence is well described by the relation σ_φ₀ = 28.5° / β.
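    A minimal Monte Carlo in this spirit can be written with a rejection sampler and Fourier-sum estimators. Here the modulation is written in the equivalent form 1 + a·cos 2(φ − φ₀) (a cos² law rescales to this); the function names, sample size, and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_angles(n, a, phi0):
    """Rejection-sample photon angles from p(phi) ∝ 1 + a*cos(2*(phi - phi0))
    on (0, pi)."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0, np.pi, 4 * n)
        keep = rng.uniform(0, 1 + a, phi.size) < 1 + a * np.cos(2 * (phi - phi0))
        out = np.concatenate([out, phi[keep]])
    return out[:n]

def estimate_modulation(phi):
    """Fourier-sum estimates of the modulation amplitude and position angle.
    For this density E[cos 2phi] = (a/2)cos 2phi0 and E[sin 2phi] = (a/2)sin 2phi0."""
    c, s = np.cos(2 * phi).mean(), np.sin(2 * phi).mean()
    a_hat = 2.0 * np.hypot(c, s)
    phi0_hat = 0.5 * np.arctan2(s, c)
    return a_hat, phi0_hat

phi = sample_angles(200_000, a=0.3, phi0=0.6)
a_hat, phi0_hat = estimate_modulation(phi)  # close to 0.3 and 0.6
```

    Repeating this over many realizations at fixed counts yields empirical detection probabilities and position-angle scatter, which is how relations like the σ_φ₀ scaling can be checked numerically.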

  10. On-Call Communication in Orthopaedic Trauma: "A Picture Is Worth a Thousand Words"--A Survey of OTA Members.

    PubMed

    Molina, Cesar S; Callan, Alexandra K; Burgos, Eduardo J; Mir, Hassan R

    2015-05-01

    To quantify the effects of varying clinical communication styles (verbal and pictorial) on the ability of orthopaedic trauma surgeons to understand an injury and formulate an initial management plan. A Research Electronic Data Capture survey was e-mailed to all OTA members. Respondents quantified (5-point Likert scale) how confident they felt understanding an injury and establishing an initial management plan based on the information provided for 5 common orthopaedic trauma scenarios. Three verbal descriptions were created for each scenario and categorized as limited, moderate, or detailed. The questions were repeated with the addition of a radiographic image and then repeated a third time including a clinical photograph. Statistical evaluation consisted of descriptive statistics and Kruskal-Wallis analyses using STATA (version 12.0). Of the 221 respondents, a total of 95 completed the entire survey. Nearly all were currently taking call (92/95 = 96.8%), and the majority were fellowship trained (79/95 = 83.2%). Most practice at a level I trauma center (58/95 = 61.1%) and work with orthopaedic residents (62/95 = 65.3%). There was a significant increase in confidence scores between a limited, moderate, and detailed description in all clinical scenarios for understanding the injury and establishing an initial management plan (P < 0.05). There was a significant difference in confidence scores between all 3 types of evidence presented (verbal, verbal + x-ray, verbal + x-ray + photograph) in both understanding and managing the injury for limited and moderate descriptions (P < 0.001). No differences were seen when adding pictorial information to the detailed verbal description. When comparing confidence scores between a detailed description without images and a limited description that includes radiographs and a photograph, no difference in confidence levels was seen in 7 of the 10 scenarios (P > 0.05).
The addition of images in the form of radiographs and/or clinical photographs greatly improves the confidence of orthopaedic trauma surgeons in understanding injuries and establishing initial management plans with limited verbal information (P < 0.001). The inclusion of x-rays and photographs raises the confidence for understanding and management with limited verbal information to the level of a detailed verbal description in most scenarios. Mobile technology allows for easy, secure transfer of images that can compensate for limited verbal information, given the knowledge base of the communicating providers.

  11. Increasing Confidence in a Statistics Course: Assessing Students' Beliefs and Using the Data to Design Curriculum with Students

    ERIC Educational Resources Information Center

    Huchting, Karen

    2013-01-01

    Students were involved in the curriculum design of a statistics course. They completed a pre-assessment of their confidence and skills using quantitative methods and statistics. Scores were aggregated, and anonymous data were shown on the first night of class. Using these data, the course was designed, demonstrating evidence-based instructional…

  12. Socioeconomic status, statistical confidence, and patient-provider communication: an analysis of the Health Information National Trends Survey (HINTS 2007).

    PubMed

    Smith, Samuel G; Wolf, Michael S; von Wagner, Christian

    2010-01-01

    The increasing trend of exposing patients seeking health advice to numerical information has the potential to adversely impact patient-provider relationships especially among individuals with low literacy and numeracy skills. We used the HINTS 2007 to provide the first large scale study linking statistical confidence (as a marker of subjective numeracy) to demographic variables and a health-related outcome (in this case the quality of patient-provider interactions). A cohort of 7,674 individuals answered sociodemographic questions, a question on how confident they were in understanding medical statistics, a question on preferences for words or numbers in risk communication, and a measure of patient-provider interaction quality. Over thirty-seven percent (37.4%) of individuals lacked confidence in their ability to understand medical statistics. This was particularly prevalent among the elderly, low income, low education, and non-White ethnic minority groups. Individuals who lacked statistical confidence demonstrated clear preferences for having risk-based information presented with words rather than numbers and were 67% more likely to experience a poor patient-provider interaction, after controlling for gender, ethnicity, insurance status, the presence of a regular health care professional, and the language of the telephone interview. We will discuss the implications of our findings for health care professionals.

  13. Education level and inequalities in stroke reperfusion therapy: observations in the Swedish stroke register.

    PubMed

    Stecksén, Anna; Glader, Eva-Lotta; Asplund, Kjell; Norrving, Bo; Eriksson, Marie

    2014-09-01

    Previous studies have revealed inequalities in stroke treatment based on demographics, hospital type, and region. We used the Swedish Stroke Register (Riksstroke) to test whether patient education level is associated with reperfusion (either or both of thrombolysis and thrombectomy) treatment. We included 85 885 patients with ischemic stroke aged 18 to 80 years registered in Riksstroke between 2003 and 2009. Education level was retrieved from Statistics Sweden, and thrombolysis, thrombectomy, patient, and hospital data were obtained from Riksstroke. We used multivariable logistic regression to analyze the association between reperfusion therapy and patient education. A total of 3649 (4.2%) of the patients received reperfusion therapy. University-educated patients were more likely to be treated (5.5%) than patients with secondary (4.6%) or primary education (3.6%; P<0.001). The inequality associated with education was still present after adjustment for patient characteristics; university education odds ratio, 1.14; 95% confidence interval, 1.03 to 1.26 and secondary education odds ratio, 1.08; 95% confidence interval, 1.00 to 1.17 compared with primary education. Higher hospital specialization level was also associated with higher reperfusion levels (P<0.001). In stratified multivariable analyses by hospital type, significant treatment differences by education level existed only among large nonuniversity hospitals (university education odds ratio, 1.20; 95% confidence interval, 1.04-1.40; secondary education odds ratio, 1.14; 95% confidence interval, 1.01-1.29). We demonstrated a social stratification in reperfusion, partly explained by patient characteristics and the local hospital specialization level. Further studies should address treatment delays, stroke knowledge, and means to improve reperfusion implementation in less specialized hospitals. © 2014 American Heart Association, Inc.

  14. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematic model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analysis methods for them, and using analysis of chromatographic fingerprints for various extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of five main parameters: (1) total quantum statistical moment similarity (ST), the area of overlap between the two normal distribution probability density curves obtained by converting the two TQSM parameters; (2) total variability (DT), a confidence limit of the standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities integrated to the intersection of their curves; (3) total variable probability (1-Ss), the standard normal distribution probability within the interval DT; (4) total variable probability (1-beta)alpha and (5) stable confident probability beta(1-alpha), the probabilities of correctly drawing positive and negative conclusions under confidence coefficient alpha. 
    With the model, we analyzed the TQSMS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data-analysis methods for them: they ranged from 0.3852 to 0.9875, illuminating the different pharmacokinetic behaviors. The TQSMS similarities (ST) of the chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract ranged from 0.6842 to 0.9992, showing the different constituents extracted by the various solvents. The TQSMSS can characterize sample similarity, by which we can quantitate, with a power test, the probability of correctly drawing positive and negative conclusions as to whether the samples come from the same population under confidence coefficient alpha, and by which we can realize analysis at both macroscopic and microcosmic levels, making it an important similarity-analysis method for medical theoretical research.
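
    The ST parameter is the overlapped area of two normal probability density curves. As a minimal numerical sketch (not the authors' implementation), the overlap can be computed by integrating the pointwise minimum of the two densities with the trapezoidal rule:

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal probability density function."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def overlap_area(mu1, s1, mu2, s2, n=20000):
    """Area shared by two normal density curves (trapezoidal rule over +/- 6 sigma)."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        y = min(normal_pdf(x, mu1, s1), normal_pdf(x, mu2, s2))
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        total += w * y * h
    return total

print(overlap_area(0.0, 1.0, 0.0, 1.0))   # identical curves: ~1.0
print(overlap_area(0.0, 1.0, 2.0, 1.0))   # shifted curves: ~0.317
```

    Identical distributions give a similarity of 1, and the overlap shrinks toward 0 as the distributions separate, matching the 0 to 1 ranges reported above.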

  15. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    PubMed

    Harari, Gil

    2014-01-01

    Statistical significance, expressed as a p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about statistical probability and support conclusions regarding the clinical significance of study findings. This article describes the two methodologies, compares them, assesses their suitability for the different needs of study-results analysis, and explains the situations in which each method should be used.
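
    The complementarity of the two measures can be shown in a few lines: the same standard error yields both a two-sided p-value and a 95% CI for the effect estimate. The data below are simulated and a large-sample normal approximation is assumed:

```python
from statistics import NormalDist, mean, stdev
import random

random.seed(1)
# Simulated measurements for two illustrative groups (not from the article).
a = [random.gauss(10.0, 2.0) for _ in range(200)]
b = [random.gauss(10.8, 2.0) for _ in range(200)]

diff = mean(b) - mean(a)
se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
z = diff / se
p = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
ci = (diff - 1.96 * se, diff + 1.96 * se)       # 95% CI for the difference
print(f"diff={diff:.2f}, p={p:.4f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

    The p-value answers "is the difference compatible with zero?", while the CI additionally shows the range of effect sizes compatible with the data, which is the clinically useful part.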

  16. Dose Assessment of Los Alamos National Laboratory-Derived Residual Radionuclides in Soils within C Tracts (C-2, C-3, and C-4) for Land Transfer Decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillis, Jessica M.; Whicker, Jeffrey J.

    2016-01-26

    Three separate Sampling and Analysis Plans (SAPs) were prepared for tracts C-2, C-3, and C-4. The objective of sampling was to confirm, within the stated statistical confidence limits, that the mean levels of potential radioactive residual contamination in soils in the C Tracts are documented, in appropriate units, and are below the 15 mrem/y (150 μSv/y) Screening Action Levels (SALs). Results show that upper-bound 95% confidence levels for radionuclide concentrations were close to background levels, with the exception of Pu-239 and Cs-137, which were slightly elevated above background; all measurements were below the SALs and meet the real property release criteria for future construction or recreational use. A follow-up ALARA analysis showed that the costs of cleanup of the soil in areas of elevated concentration and confirmatory sampling would far exceed any benefit from dose reduction.
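
    The comparison of an upper-bound 95% confidence level against a screening level can be sketched as follows. The concentrations and SAL value are hypothetical, and a one-sided normal approximation with z = 1.645 is assumed; the actual SAPs may prescribe a t-based or other bound:

```python
from statistics import mean, stdev
from math import sqrt

def upper_confidence_bound(samples, z=1.645):
    """One-sided upper 95% bound on the mean (normal approximation)."""
    n = len(samples)
    return mean(samples) + z * stdev(samples) / sqrt(n)

# Hypothetical soil concentrations and an illustrative screening level.
conc = [0.021, 0.018, 0.025, 0.030, 0.022, 0.019, 0.027, 0.024, 0.020, 0.023]
ucl = upper_confidence_bound(conc)
sal = 0.05
print(f"95% UCL = {ucl:.4f}; below SAL: {ucl < sal}")
```

    Using the upper confidence bound rather than the sample mean is the conservative choice: the site passes only if even a plausibly high estimate of the true mean stays below the action level.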

  17. Surgical resident supervision in the operating room and outcomes of care in Veterans Affairs hospitals.

    PubMed

    Itani, Kamal M F; DePalma, Ralph G; Schifftner, Tracy; Sanders, Karen M; Chang, Barbara K; Henderson, William G; Khuri, Shukri F

    2005-11-01

    There has been concern that a reduced level of surgical resident supervision in the operating room (OR) is correlated with worse patient outcomes. Until September 2004, Veterans Affairs (VA) hospitals entered in the surgical record level 3 supervision on every surgical case in which the attending physician was available but not physically present in the OR or the OR suite. In this study, we assessed the impact of level 3 supervision on risk-adjusted morbidity and mortality in the VA system. Surgical cases entered into the National Surgical Quality Improvement Program database between 1998 and 2004, from 99 VA teaching facilities, were included in a logistic regression analysis for each year. Level 3 versus all other levels of supervision was forced into the model, and patient characteristics then were selected stepwise to arrive at a final model. Confidence limits for the odds ratios were calculated by profile likelihood. A total of 610,660 cases were available for analysis. Thirty-day mortality and morbidity were reported in 14,441 (2.36%) and 63,079 (10.33%) cases, respectively. Level 3 supervision decreased from 8.72% in 1998 to 2.69% in 2004. In the logistic regression analysis, the odds ratios for mortality for level 3 ranged from .72 to 1.03. Only in the year 2000 was the odds ratio for mortality statistically significant at the .05 level (odds ratio, .72; 95% confidence interval, .594-.858). For morbidity, the odds ratios for level 3 supervision ranged from .66 to 1.01, and all odds ratios except for the year 2004 were statistically significant. Between 1998 and 2004, the level of resident supervision in the OR did not adversely affect clinical outcomes for surgical patients in the VA teaching hospitals.

  18. Evaluation of PCR Systems for Field Screening of Bacillus anthracis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozanich, Richard M.; Colburn, Heather A.; Victry, Kristin D.

    There is little published data on the performance of hand-portable polymerase chain reaction (PCR) instruments that could be used by first responders to determine if a suspicious powder contains a potential biothreat agent. We evaluated five commercially available hand-portable PCR instruments for detection of Bacillus anthracis (Ba). We designed a cost-effective, statistically-based test plan that allows instruments to be evaluated at performance levels ranging from 0.85-0.95 lower confidence bound (LCB) on the probability of detection (POD) at confidence levels of 80-95%. We assessed specificity using purified genomic DNA from 13 Ba strains and 18 Bacillus near neighbors, interference with 22 common hoax powders encountered in the field, and PCR inhibition when Ba spores were spiked into these powders. Our results indicated that three of the five instruments achieved >0.95 LCB on the POD with 95% confidence at test concentrations of 2,000 genome equivalents/mL (comparable to 2,000 spores/mL), displaying more than sufficient sensitivity for screening suspicious powders. These instruments exhibited no false positive results or PCR inhibition with common hoax powders, and reliably detected Ba spores spiked into common hoax powders, though some issues with instrument controls were observed. Our approach enables efficient, statistically rigorous, and cost-effective instrument performance testing, generating performance data that will allow users to make informed decisions regarding the purchase and use of biodetection equipment in the field.
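
    A lower confidence bound on the probability of detection of the kind used in such test plans has a closed form when every trial succeeds. This is the standard Clopper-Pearson construction, sketched below; the instrument-specific plans may differ in detail:

```python
from math import log, ceil

def lcb_all_successes(n, confidence=0.95):
    """Exact lower confidence bound on the detection probability
    when all n trials succeed (Clopper-Pearson with x = n)."""
    alpha = 1 - confidence
    return alpha ** (1 / n)

def trials_needed(lcb_target, confidence=0.95):
    """Smallest n such that n successes in n trials demonstrate
    POD >= lcb_target at the given confidence."""
    alpha = 1 - confidence
    return ceil(log(alpha) / log(lcb_target))

print(trials_needed(0.95))               # 59 all-success trials
print(round(lcb_all_successes(59), 4))   # LCB demonstrated by 59/59
```

    This recovers the familiar result that 59 consecutive detections demonstrate a 0.95 POD with 95% confidence, while the less demanding 0.85 LCB mentioned above needs far fewer trials.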

  19. Distinguishing Man from Molecules: The Distinctiveness of Medical Concepts at Different Levels of Description

    PubMed Central

    Cole, William G.; Michael, Patricia; Blois, Marsden S.

    1987-01-01

    A computer program was created to use information about the statistical distribution of words in journal abstracts to make probabilistic judgments about the level of description (e.g. molecular, cell, organ) of medical text. Statistical analysis of 7,409 journal abstracts taken from three medical journals representing distinct levels of description revealed that many medical words seem to be highly specific to one or another level of description. For example, the word adrenoreceptors occurred only in the American Journal of Physiology, never in the Journal of Biological Chemistry or the Journal of the American Medical Association. Such highly specific words occurred so frequently that the automatic classification program was able to classify correctly 45 out of 45 test abstracts, with 100% confidence. These findings are interpreted in terms of both a theory of the structure of medical knowledge and the pragmatics of automatic classification.
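
    A word-frequency classifier of this kind can be sketched as a smoothed bag-of-words model; the toy corpora below are invented stand-ins for the journal abstracts, not the study's data:

```python
from collections import Counter
import math

# Tiny invented corpora standing in for abstracts at two levels of description.
molecular = "receptor kinase phosphorylation ligand binding receptor enzyme"
clinical = "patient diagnosis treatment trial patient symptom outcome"

def train(text):
    words = text.split()
    return Counter(words), len(words)

def log_prob(text, model, vocab_size):
    """Log-likelihood of the text under a unigram model with
    Laplace smoothing, so unseen words do not zero out the product."""
    counts, total = model
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in text.split())

vocab = set((molecular + " " + clinical).split())
m_model, c_model = train(molecular), train(clinical)

test_abstract = "receptor ligand binding"
m_score = log_prob(test_abstract, m_model, len(vocab))
c_score = log_prob(test_abstract, c_model, len(vocab))
print("molecular" if m_score > c_score else "clinical")
```

    Because level-specific words dominate, even this crude model assigns the test text to the molecular level with a large likelihood margin, mirroring the high-confidence classification reported above.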

  20. "I'm Not a Natural Mathematician": Inquiry-Based Learning, Constructive Alignment and Introductory Quantitative Social Science

    ERIC Educational Resources Information Center

    Clark, Tom; Foster, Liam

    2017-01-01

    There is continuing concern about the paucity of social science graduates who have the quantitative skills required by academia and industry. Not only do students often lack the confidence to explore, and use, statistical techniques, the dominance of qualitative research in many disciplines has also often constrained programme-level integration of…

  1. Integrating Multiple Knowledge Sources for Utterance-Level Confidence Annotation in the CMU Communicator Spoken Dialog System

    DTIC Science & Technology

    2002-11-01


  2. Perceptions of the use of critical thinking teaching methods.

    PubMed

    Kowalczyk, Nina; Hackworth, Ruth; Case-Smith, Jane

    2012-01-01

    To identify the perceived level of competence in teaching and assessing critical thinking skills and the difficulties facing radiologic science program directors in implementing student-centered teaching methods. A total of 692 program directors received an invitation to complete an electronic survey soliciting information regarding the importance of critical thinking skills, their confidence in applying teaching methods and assessing student performance, and perceived obstacles. Statistical analysis included descriptive data, correlation coefficients, and ANOVA. Responses were received from 317 participants, indicating that program directors perceive critical thinking to be an essential element in student education; however, they identified several areas for improvement. A high correlation was identified between the program directors' perceived level of skill and their confidence in critical thinking, and between their perceived level of skill and ability to assess the students' critical thinking. Key barriers to implementing critical thinking teaching strategies were identified. Program directors value the importance of implementing critical thinking teaching methods and perceive a need for professional development in critical thinking educational methods. Regardless of the type of educational institution in which the academic program is located, the level of education held by the program director was a significant factor regarding perceived confidence in the ability to model critical thinking skills and the ability to assess student critical thinking skills.

  3. A hybrid Q-learning sine-cosine-based strategy for addressing the combinatorial test suite minimization problem

    PubMed Central

    Zamli, Kamal Z.; Din, Fakhrud; Bures, Miroslav

    2018-01-01

    The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level. PMID:29771918

  4. A hybrid Q-learning sine-cosine-based strategy for addressing the combinatorial test suite minimization problem.

    PubMed

    Zamli, Kamal Z; Din, Fakhrud; Ahmed, Bestoun S; Bures, Miroslav

    2018-01-01

    The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level.
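
    Claims of superiority "at the 95% confidence level" between paired strategies are often checked with a nonparametric test. A paired permutation test is one stdlib-only sketch; the test-suite sizes below are invented, not the paper's results:

```python
import random

def permutation_test(x, y, n_iter=10000, seed=0):
    """Two-sided paired permutation test on the mean difference:
    randomly flip the sign of each paired difference and count how
    often the shuffled mean is at least as extreme as the observed one."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(x, y)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_iter):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical minimized test-suite sizes from two strategies on 10 configurations.
qlsca = [19, 21, 20, 22, 18, 20, 21, 19, 20, 22]
sca = [21, 23, 22, 24, 20, 22, 23, 21, 22, 24]
p = permutation_test(qlsca, sca)
print(f"p = {p:.4f}; significant at the 95% confidence level: {p < 0.05}")
```

    Rejecting at p < 0.05 corresponds to the 95% confidence level used in the comparisons above; loosening the threshold to p < 0.10 corresponds to the 90% level.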

  5. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and biased-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…

  6. The Role of Residential Early Parenting Services in Increasing Parenting Confidence in Mothers with A History of Infertility.

    PubMed

    Khajehei, Marjan; Finch, Lynette

    2016-01-01

    Mothers with a history of infertility may experience parenting difficulties and challenges. This study was conducted to investigate the role of residential early parenting services in increasing parenting confidence in mothers with a history of infertility. This was a retrospective chart review study using the quantitative data from the clients attending the Karitane Residential Units and Parenting Services (known as Karitane RUs) during 2013. Parenting confidence (using the Karitane Parenting Confidence Scale-KPCS), depression, demographics, reproductive and medical history, as well as the child's information were assessed from a sample of 27 mothers who had a history of infertility and who attended the Karitane RUs for support and assistance. The data were analyzed using SPSS version 19. More than half of the women (59.3%) reported a relatively low level of parenting confidence on the day of admission. The rate of low parenting confidence, however, dropped to 22.2% after receiving 4-5 days of support and training in the Karitane RUs. The mean score of the KPCS increased from 36.9 ± 5.6 before the intervention to 41.1 ± 3.4 after the intervention, indicating an improvement in the parenting confidence of the mothers after attending the Karitane RUs (P<0.0001). No statistically significant association was found between low maternal parenting confidence and parental demographics (including age, country of birth, and employment status), a history of help-seeking, symptoms of depression, or the child's information [including gender, age, siblings, diagnosis of gastroesophageal reflux disease (GORD) and use of medication]. Having a child after a period of infertility can be a stressful experience for some mothers. This can result in low parenting confidence and affect parent-child attachment. 
    Our findings emphasized the role of residential early parenting services in promoting parenting confidence and highlighted the need for early recognition and referral of mothers with a history of infertility to such centers.
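
    A before/after change in confidence scores like the KPCS result above is a paired analysis. A minimal sketch with invented scores follows; the t critical value for df = 9 is hard-coded as an assumption:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical pre/post KPCS scores for 10 mothers (not the study data).
pre = [35, 38, 30, 41, 36, 33, 39, 37, 34, 40]
post = [40, 42, 36, 44, 41, 38, 43, 42, 39, 44]

diffs = [b - a for a, b in zip(pre, post)]
d_mean = mean(diffs)
se = stdev(diffs) / sqrt(len(diffs))     # standard error of the mean change
t_crit = 2.262                           # t(0.975, df = 9)
ci = (d_mean - t_crit * se, d_mean + t_crit * se)
print(f"mean change = {d_mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

    A confidence interval for the mean change that excludes zero supports the conclusion that confidence improved after the intervention.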

  7. An experiment on the impact of a neonicotinoid pesticide on honeybees: the value of a formal analysis of the data.

    PubMed

    Schick, Robert S; Greenwood, Jeremy J D; Buckland, Stephen T

    2017-01-01

    We assess the analysis of the data resulting from a field experiment conducted by Pilling et al. (PLoS ONE. doi: 10.1371/journal.pone.0077193, 5) on the potential effects of thiamethoxam on honeybees. The experiment had low levels of replication, so Pilling et al. concluded that formal statistical analysis would be misleading. This would be true if such an analysis merely comprised tests of statistical significance and if the investigators concluded that lack of significance meant little or no effect. However, an analysis that includes estimation of the size of any effects, with confidence limits, allows one to reach conclusions that are not misleading and that produce useful insights. For the data of Pilling et al., we use straightforward statistical analysis to show that the confidence limits are generally so wide that any effects of thiamethoxam could have been large without being statistically significant. Instead of formal analysis, Pilling et al. simply inspected the data and concluded that they provided no evidence of detrimental effects and from this that thiamethoxam poses a "low risk" to bees. Conclusions derived from the inspection of the data were not just misleading in this case but are also unacceptable in principle, for if data are inadequate for a formal analysis (or only good enough to provide estimates with wide confidence intervals), then they are bound to be inadequate as a basis for reaching any sound conclusions. Given that the data in this case are largely uninformative with respect to the treatment effect, any conclusions reached from such informal approaches can do little more than reflect the prior beliefs of those involved.
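
    The central point, that low replication produces confidence limits too wide to rule out large effects, can be illustrated by how the width of a 95% CI for a difference between two group means shrinks with replication. The sigma is illustrative and a normal approximation is assumed:

```python
from statistics import NormalDist
from math import sqrt

def ci_width(sigma, n, confidence=0.95):
    """Width of a CI for the difference in means of two groups of size n."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    se = sigma * sqrt(2 / n)
    return 2 * z * se

# With only 4 replicates per treatment, effects smaller than half the CI
# width cannot be distinguished from zero even if practically large.
for n in (4, 16, 64):
    print(n, round(ci_width(sigma=10.0, n=n), 1))
```

    Quadrupling the replication halves the interval width, which is exactly the information that inspecting the raw data cannot supply.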

  8. Emergency department patient satisfaction survey in Imam Reza Hospital, Tabriz, Iran

    PubMed Central

    2011-01-01

    Introduction Patient satisfaction is an important indicator of the quality of care and service delivery in the emergency department (ED). The objective of this study was to evaluate patient satisfaction with the Emergency Department of Imam Reza Hospital in Tabriz, Iran. Methods This study was carried out for 1 week during all shifts. Trained researchers used the standard Press Ganey questionnaire. Patients were asked to complete the questionnaire prior to discharge. The study questionnaire included 30 questions based on a Likert scale. Descriptive and analytical statistics were used throughout data analysis in a number of ways using SPSS version 13. Results Five hundred patients who attended our ED were included in this study. The highest satisfaction rates were observed in terms of physicians' communication with patients (82.5%), security guards' courtesy (78.3%) and nurses' communication with patients (78%). The average waiting time for the first visit to a physician was 24 min 15 s. The overall satisfaction rate was dependent on the mean waiting time. The mean waiting time for a low rate of satisfaction was 47 min 11 s with a confidence interval of (19.31, 74.51), and for a very good level of satisfaction it was 14 min 57 s with a confidence interval of (10.58, 18.57). Approximately 63% of the patients rated their general satisfaction with the emergency setting as good or very good. On the whole, the patient satisfaction rate at the lowest level was 7.7% with a confidence interval of (5.1, 10.4), and at the low level it was 5.8% with a confidence interval of (3.7, 7.9). The rate of satisfaction for the mediocre level was 23.3% with a confidence interval of (19.1, 27.5); for the high level of satisfaction it was 28.3% with a confidence interval of (22.9, 32.8), and for the very high level of satisfaction, this rate was 32.9% with a confidence interval of (28.4, 37.4). 
Conclusion The study findings indicated the need for evidence-based interventions in emergency care services in areas such as medical care, nursing care, courtesy of staff, physical comfort and waiting time. Efforts should focus on shortening waiting intervals and improving patients' perceptions about waiting in the ED, and also improving the overall cleanliness of the emergency room. PMID:21407998
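
    Confidence intervals for satisfaction proportions like those reported above can be reproduced with a Wald interval. The 63% figure and n = 500 are taken from the abstract; the interval itself is a normal approximation, adequate at this sample size:

```python
from math import sqrt

def proportion_ci(p_hat, n, z=1.96):
    """Wald 95% CI for a proportion (normal approximation, large n)."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 63% of 500 surveyed patients rated satisfaction as good or very good.
lo, hi = proportion_ci(0.63, 500)
print(f"63% (95% CI {lo:.1%} to {hi:.1%})")
```

    Each of the per-level rates quoted above can be bracketed the same way by substituting its proportion into the formula.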

  9. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
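
    The same standard-error and confidence-interval computation can be done outside Excel. A stdlib Python sketch that mirrors AVERAGE, STDEV.S/SQRT(COUNT), and CONFIDENCE.NORM on illustrative data:

```python
from statistics import mean, stdev, NormalDist
from math import sqrt

data = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8, 4.4]  # illustrative sample

m = mean(data)
se = stdev(data) / sqrt(len(data))     # standard error of the mean
z = NormalDist().inv_cdf(0.975)        # 1.96 for a 95% interval
ci = (m - z * se, m + z * se)          # mirrors mean +/- CONFIDENCE.NORM
print(f"mean={m:.3f}, SE={se:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```

    For small samples, replacing z with the appropriate t quantile (Excel's CONFIDENCE.T) widens the interval to account for the estimated standard deviation.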

  10. Universal Design and Disability: Assessing Faculty Beliefs, Knowledge, and Confidence in Universal Design for Instruction

    ERIC Educational Resources Information Center

    Hartsoe, Joseph K.; Barclay, Susan R.

    2017-01-01

    The purpose of this study was to investigate faculty belief, knowledge, and confidence in the principles of Universal Design for Instruction (UDI). Results yielded statistically significant correlations between participant's belief and knowledge of the principles of UDI. Furthermore, findings yielded statistically significant differences between…

  11. Do Orthopaedic Surgeons Acknowledge Uncertainty?

    PubMed

    Teunis, Teun; Janssen, Stein; Guitton, Thierry G; Ring, David; Parisien, Robert

    2016-06-01

    Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if other factors are involved. With added experience, uncertainty could be expected to diminish, but perhaps more influential are things like physician confidence, belief in the veracity of what is published, and even one's religious beliefs. In addition, it is plausible that the kind of practice a physician works in can affect the experience of uncertainty. Practicing physicians may not be immediately aware of these effects on how uncertainty is experienced in their clinical decision-making. We asked: (1) Do uncertainty and overconfidence bias decrease with years of practice? (2) What sociodemographic factors are independently associated with less recognition of uncertainty, in particular belief in God or other deity or deities, and how is atheism associated with recognition of uncertainty? (3) Do confidence bias (confidence that one's skill is greater than it actually is), degree of trust in the orthopaedic evidence, and degree of statistical sophistication correlate independently with recognition of uncertainty? We created a survey to establish an overall recognition of uncertainty score (four questions), trust in the orthopaedic evidence base (four questions), confidence bias (three questions), and statistical understanding (six questions). Seven hundred six members of the Science of Variation Group, a collaboration that aims to study variation in the definition and treatment of human illness, were approached to complete our survey. This group represents mainly orthopaedic surgeons specializing in trauma or hand and wrist surgery, practicing in Europe and North America, the majority of whom are involved in teaching. 
    Approximately half of the group has more than 10 years of experience. Two hundred forty-two (34%) members completed the survey. We found no differences between responders and nonresponders. Each survey item measured its own trait better than any of the other traits. Recognition of uncertainty (0.70) and confidence bias (0.75) had relatively high Cronbach alpha levels, meaning that the questions making up these traits are closely related and probably measure the same construct. This was lower for statistical understanding (0.48) and trust in the orthopaedic evidence base (0.37). Subsequently, combining each trait's individual questions, we calculated a 0 to 10 score for each trait. The mean recognition of uncertainty score was 3.2 ± 1.4. Recognition of uncertainty in daily practice did not vary by years in practice (0-5 years, 3.2 ± 1.3; 6-10 years, 2.9 ± 1.3; 11-20 years, 3.2 ± 1.4; 21-30 years, 3.3 ± 1.6; p = 0.51), but overconfidence bias did correlate with years in practice (0-5 years, 6.2 ± 1.4; 6-10 years, 7.1 ± 1.3; 11-20 years, 7.4 ± 1.4; 21-30 years, 7.1 ± 1.2; p < 0.001). Accounting for a potential interaction of variables using multivariable analysis, less recognition of uncertainty was independently but weakly associated with working in a multispecialty group compared with academic practice (β regression coefficient, -0.53; 95% confidence interval [CI], -1.0 to -0.055; partial R(2), 0.021; p = 0.029), belief in God or any other deity/deities (β, -0.57; 95% CI, -1.0 to -0.11; partial R(2), 0.026; p = 0.015), greater confidence bias (β, -0.26; 95% CI, -0.37 to -0.14; partial R(2), 0.084; p < 0.001), and greater trust in the orthopaedic evidence base (β, -0.16; 95% CI, -0.26 to -0.058; partial R(2), 0.040; p = 0.002). Better statistical understanding was independently, and more strongly, associated with greater recognition of uncertainty (β, 0.25; 95% CI, 0.17-0.34; partial R(2), 0.13; p < 0.001). 
    Our full model accounted for 29% of the variability in recognition of uncertainty (adjusted R(2), 0.29). The relatively low levels of uncertainty among orthopaedic surgeons and confidence bias seem inconsistent with the paucity of definitive evidence. If patients want to be informed of the areas of uncertainty and surgeon-to-surgeon variation relevant to their care, it seems possible that a low recognition of uncertainty and surgeon confidence bias might hinder adequately informing patients, informed decisions, and consent. Moreover, limited recognition of uncertainty is associated with modifiable factors such as confidence bias, trust in the orthopaedic evidence base, and statistical understanding. Perhaps improved statistical teaching in residency, journal clubs to improve the critique of evidence and awareness of bias, and acknowledgment of knowledge gaps at courses and conferences might create awareness about existing uncertainties. Level 1, prognostic study.
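
    The Cronbach alpha values quoted above measure the internal consistency of each multi-item trait. The statistic can be computed from per-item scores as follows; the responses below are invented, and population variances are used consistently:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists over the same respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical responses to 3 survey items from 6 respondents.
items = [
    [4, 3, 5, 2, 4, 3],
    [5, 3, 4, 2, 4, 3],
    [4, 2, 5, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))
```

    Values around 0.7 and above, as for the recognition-of-uncertainty and confidence-bias traits, indicate that the items plausibly measure one construct; the 0.37-0.48 values above do not.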

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nunes, Rafael C.; Abreu, Everton M.C.; Neto, Jorge Ananias

    Based on the relationship between thermodynamics and gravity we propose, with the aid of Verlinde's formalism, an alternative interpretation of the dynamical evolution of the Friedmann-Robertson-Walker Universe. This description takes into account the entropy and temperature intrinsic to the horizon of the universe due to the information holographically stored there, through the non-gaussian statistical theories proposed by Tsallis and Kaniadakis. The effect of these non-gaussian statistics in the cosmological context is to change the strength of the gravitational constant. In this paper, we consider the wCDM model modified by the non-gaussian statistics and investigate the compatibility of these non-gaussian modifications with the cosmological observations. In order to analyze to what extent the cosmological data constrain these non-extensive statistics, we use type Ia supernovae, baryon acoustic oscillations, the Hubble expansion rate function and the linear growth of matter density perturbations data. We show that Tsallis' statistics is favored at the 1σ confidence level.

  13. Two Different Views on the World Around Us: The World of Uniformity versus Diversity.

    PubMed

    Kwon, JaeHwan; Nayakankuppam, Dhananjay

    2016-01-01

    We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of "uniformity." As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of "diversity," such that they "hesitate" to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as consisting of either homogeneous or heterogeneous entities.

  14. Dynamic association rules for gene expression data analysis.

    PubMed

    Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung

    2015-10-14

    The purpose of gene expression analysis is to look for the association between regulation of gene expression levels and phenotypic variations. This association based on gene expression profile has been used to determine whether the induction/repression of genes corresponds to phenotypic variations including cell regulations, clinical diagnoses and drug development. Statistical analyses on microarray data have been developed to resolve the gene selection issue. However, these methods do not inform us of causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps one efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine if an association rule is meaningful. Based on the proposed statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to analyze four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of Leukemia patients, the Microarray Quality Control (MAQC) dataset and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test on the expression profiling of the bone marrow of Leukemia patients was conducted. We developed a statistical way, based on the concept of confidence interval, to determine the minimum support and minimum confidence for mining association relationships among items. With the minimum support and minimum confidence, one can find significant rules in one single step. The DAR algorithm was then developed for gene expression data analysis. 
Four gene expression datasets showed that the proposed DAR algorithm not only was able to identify a set of differentially expressed genes that largely agreed with that of other methods, but also provided an efficient and accurate way to find influential genes of a disease. In the paper, the well-established association rule mining technique from marketing has been successfully modified to determine the minimum support and minimum confidence based on the concept of confidence interval and hypothesis testing. It can be applied to gene expression data to mine significant association rules between gene regulation and phenotype. The proposed DAR algorithm provides an efficient way to find influential genes that underlie the phenotypic variance.
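
    The quantities that minimum support and minimum confidence threshold can be sketched as follows. The transaction data, gene labels, and the plain Wald normal approximation for the one-sided lower bound are illustrative assumptions, not the paper's exact construction:

```python
import math

def rule_stats(transactions, antecedent, consequent, z=1.645):
    """Support and confidence of the rule antecedent -> consequent, plus a
    one-sided lower confidence bound on the rule confidence (Wald normal
    approximation; z = 1.645 gives a 95% one-sided bound)."""
    n = len(transactions)
    n_a = sum(antecedent <= t for t in transactions)                  # antecedent present
    n_ac = sum((antecedent | consequent) <= t for t in transactions)  # both present
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    se = math.sqrt(confidence * (1 - confidence) / n_a) if n_a else 0.0
    return support, confidence, confidence - z * se

# Toy "market basket" data: each transaction is a set of up-regulated genes
# (hypothetical labels, for illustration only).
data = [{"g1", "g2"}, {"g1", "g2", "g3"}, {"g1"}, {"g2", "g3"}, {"g1", "g2"}]
sup, conf, lower = rule_stats(data, {"g1"}, {"g2"})
print(sup, conf, lower)  # support 0.6, confidence 0.75, lower bound below 0.75
```

    A rule would then be kept when its support and lower confidence bound clear the minimum thresholds derived by the statistical method.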

  15. Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction

    NASA Astrophysics Data System (ADS)

    Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng

    Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely: reduction temperature, reduction time and proportion of additive on recovery of Fe have been investigated. Experiments have been carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at 92.5% confidence level.

  16. Multiple imputation methods for bivariate outcomes in cluster randomised trials.

    PubMed

    DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R

    2016-09-10

    Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
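
    The coverage criterion used to compare the imputation methods can be illustrated with a deliberately simplified, single-level sketch (hypothetical parameters; no clustering or missingness): a well-calibrated 95% interval should contain the true mean in roughly 95% of simulated datasets.

```python
import random
import statistics

def coverage(n_sims=2000, n=30, mu=5.0, sigma=2.0, z=1.96, seed=1):
    """Fraction of simulated 95% normal-approximation confidence intervals
    for the sample mean that contain the true mean mu."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / n_sims

print(coverage())  # close to the nominal 0.95
```

    Over-coverage and under-coverage relative to the nominal 95% are exactly the deficiencies the abstract reports for fixed-effects and single-level imputation, respectively.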

  17. Statistical Method Based on Confidence and Prediction Regions for Analysis of Volatile Organic Compounds in Human Breath Gas

    NASA Astrophysics Data System (ADS)

    Wimmer, G.

    2008-01-01

    In this paper we introduce two confidence and two prediction regions for the statistical characterization of concentration measurements of product ions, in order to discriminate between various groups of persons and thereby improve the detection of primary lung cancer. Two MATLAB algorithms have been created for a more adequate description of the concentration measurements of volatile organic compounds in human breath gas, for potential detection of primary lung cancer, and for evaluation of the appropriate confidence and prediction regions.
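
    The distinction between the two kinds of region is worth making concrete: a confidence interval bounds the mean concentration, while a prediction interval bounds a single future measurement and is necessarily wider. A minimal univariate sketch (hypothetical concentrations; the paper's regions are multivariate):

```python
import statistics

def mean_ci_and_prediction(sample, t_crit):
    """95% confidence interval for the mean vs. 95% prediction interval for
    one future observation, assuming approximately normal data; t_crit is
    the two-sided Student-t critical value for n - 1 degrees of freedom."""
    n = len(sample)
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)
    ci_half = t_crit * s / n ** 0.5
    pi_half = t_crit * s * (1 + 1 / n) ** 0.5
    return (m - ci_half, m + ci_half), (m - pi_half, m + pi_half)

# Hypothetical product-ion concentrations (ppb); t_crit = 2.262 for n = 10
conc = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9, 12.4, 12.1]
ci, pi = mean_ci_and_prediction(conc, 2.262)
print(ci, pi)  # the prediction interval strictly contains the confidence interval
```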

  18. Nationwide forestry applications program. Ten-Ecosystem Study (TES) site 8, Grays Harbor County, Washington

    NASA Technical Reports Server (NTRS)

    Prill, J. C. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Level 2 forest features (softwood, hardwood, clear-cut, and water) can be classified with an overall accuracy of 71.6 percent plus or minus 6.7 percent at the 90 percent confidence level for the particular data and conditions existing at the time of the study. Signatures derived from training fields taken from only 10 percent of the site are not sufficient to adequately classify the site. The level 3 softwood age group classification appears reasonable, although no statistical evaluation was performed.
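
    The quoted 71.6 plus or minus 6.7 percent at 90 percent confidence is consistent with a normal-approximation interval for a classification accuracy; the reference-sample size below (about 123) is a hypothetical figure chosen to reproduce that margin, not a number from the study:

```python
import math

def accuracy_margin(p, n, z=1.645):
    """Normal-approximation half-width of a confidence interval for a
    classification accuracy p estimated from n reference samples
    (z = 1.645 for 90% two-sided confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{accuracy_margin(0.716, 123):.3f}")  # prints 0.067
```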

  19. Assessing Evidence-Based Practice Knowledge, Attitudes, Access and Confidence Among Dental Hygiene Educators.

    PubMed

    Stanley, Jennifer L; Hanson, Carrie L; Van Ness, Christopher J; Holt, Lorie

    2015-10-01

    To assess U.S. dental hygiene educators' evidence-based practice (EBP) knowledge, attitude, access and confidence and determine whether a correlation exists between assessment scores and level of education, length of teaching and teaching setting (didactic, clinical or both). A cross-sectional survey was conducted with a sample of dental hygiene faculty from all 334 U.S. dental hygiene schools. ANOVA and Pearson correlation coefficient statistical analyses were utilized to investigate relationships between demographic variables and application of evidence-based principles of patient care. This study involved a non-probability sample (n=124), since the total faculty among all U.S. dental hygiene schools was not determined. Analysis demonstrated a positive correlation between EBP knowledge, access and confidence scores indicating that as knowledge scores increased, so did confidence and access scores (r=0.313, p<0.01 and r=0.189, p<0.05, respectively). Study findings also revealed that faculty who held advanced educational degrees scored significantly higher in EBP knowledge (F3,120=2.81, p<0.04) and confidence (F3,120=7.26, p<0.00). This study suggests the level of EBP knowledge, attitude, access and confidence increases with additional education. Therefore, more EBP training may be necessary for faculty who do not possess advanced education. Results of the study indicate that further incorporation of EBP into dental hygiene curricula may occur as dental hygiene educators' knowledge of EBP increases, which in turn could enhance students' acquisition of EBP skills and their application of EBP principles toward patient care. Copyright © 2015 The American Dental Hygienists’ Association.

  20. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  1. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. 
Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to address this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
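
    A minimal sketch of the kind of capability statistic the methodology leans on (Cpk against two-sided specification limits); the batch data and limits below are hypothetical, and a real PPQ scheme would additionally apply the confidence and coverage factors selected via PFMECA:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearest specification limit, in units of three standard deviations."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical assay results for one PPQ batch (% label claim), spec 95-105%
batch = [99.8, 100.4, 100.1, 99.5, 100.9, 100.2, 99.7, 100.3]
print(round(cpk(batch, 95.0, 105.0), 2))
```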

  2. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  3. Brain fingerprinting classification concealed information test detects US Navy military medical information with P300

    PubMed Central

    Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.

    2014-01-01

    A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. 
These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941

  4. Sequence of eruptive events in the Vesuvio area recorded in shallow-water Ionian Sea sediments

    NASA Astrophysics Data System (ADS)

    Taricco, C.; Alessio, S.; Vivaldo, G.

    2008-01-01

    The dating of the cores we drilled from the Gallipoli terrace in the Gulf of Taranto (Ionian Sea), previously obtained by tephroanalysis, is checked by applying a method to objectively recognize volcanic events. This automatic statistical procedure allows identifying pulse-like features in a series and evaluating quantitatively the confidence level at which the significant peaks are detected. We applied it to the 2000-year-long pyroxenes series of the GT89-3 core, on which the dating is based. The method confirms the dating previously performed by detecting at a high confidence level the peaks originally used, and it indicates a few possible undocumented eruptions. Moreover, a spectral analysis, focussed on the long-term variability of the pyroxenes series and performed by several advanced methods, reveals that the volcanic pulses are superimposed on a millennial trend and a 400-year oscillation.

  5. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
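
    Rosenthal's point estimator itself is simple to state; the paper's contribution is the interval around it. A sketch of the point estimate with hypothetical study z-scores (the confidence-interval construction is not reproduced here):

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe number: the number of unpublished null studies
    (mean z = 0) needed to pull the combined one-tailed z of the k observed
    studies below the significance threshold z_alpha."""
    k = len(z_values)
    z_sum = sum(z_values)
    return (z_sum / z_alpha) ** 2 - k

zs = [2.1, 1.8, 2.5, 1.2, 2.9]  # hypothetical per-study z-scores
print(round(fail_safe_n(zs), 1))  # prints 35.7
```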

  6. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470

  7. The Relationship between Zinc Levels and Autism: A Systematic Review and Meta-analysis.

    PubMed

    Babaknejad, Nasim; Sayehmiri, Fatemeh; Sayehmiri, Kourosh; Mohamadkhani, Ashraf; Bahrami, Somaye

    2016-01-01

    Autism is a complex, behaviorally defined disorder. There is thought to be a relationship between zinc (Zn) levels in autistic patients and the development of pathogenesis, but the evidence is not conclusive. The present study was conducted to estimate this probability using meta-analysis methods. In this study, using a fixed-effect model, twelve articles published from 1978 to 2012 were selected by searching Google Scholar, PubMed, ISI Web of Science, and Scopus, and the data were analyzed. I² statistics were calculated to examine heterogeneity. The data were analyzed using R and STATA Ver. 12.2. There was no significant statistical difference in hair, nail, and teeth Zn levels between controls and autistic patients: -0.471 [95% confidence interval (95% CI): -1.172 to 0.231]. There was a significant statistical difference in plasma Zn concentration between autistic patients and healthy controls: -0.253 (95% CI: -0.498 to -0.007). Using a random-effects model, the overall integration of data from the two groups was -0.414 (95% CI: -0.878 to -0.051). Based on sensitivity analysis, zinc supplements can be used as nutritional therapy for autistic patients.
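
    The fixed-effect pooling step behind such combined estimates is inverse-variance weighting; the per-study effects and standard errors below are hypothetical numbers for illustration, not values from the twelve articles:

```python
import math

def pooled_effect(effects, ses, z=1.96):
    """Fixed-effect (inverse-variance) pooled effect size with a 95%
    confidence interval."""
    weights = [1 / se ** 2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = 1 / math.sqrt(sum(weights))
    return est, est - z * se, est + z * se

# Hypothetical standardized mean differences (autistic minus control Zn)
effects = [-0.45, -0.20, -0.60, 0.10]
ses = [0.20, 0.25, 0.30, 0.22]
est, lo, hi = pooled_effect(effects, ses)
print(round(est, 3), round(lo, 3), round(hi, 3))
```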

  8. A Novel Analysis Of The Connection Between Indian Monsoon Rainfall And Solar Activity

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, S.; Narasimha, R.

    2005-12-01

    The existence of possible correlations between the solar cycle period as extracted from the yearly means of sunspot numbers and any periodicities that may be present in the Indian monsoon rainfall has been addressed using wavelet analysis. The wavelet transform coefficient maps of the sunspot-number time series and those of the homogeneous Indian monsoon rainfall annual time series data reveal striking similarities, especially around the 11-year period. A novel method to analyse and quantify this similarity by devising statistical schemes is suggested in this paper. The wavelet transform coefficient maxima at the 11-year period for the sunspot numbers and the monsoon rainfall have each been modelled as a point process in time, and a statistical scheme for identifying a trend or dependence between the two processes has been devised. A regression analysis of parameters in these processes reveals a nearly linear trend with small but systematic deviations from the regressed line. Suitable function models for these deviations have been obtained through an unconstrained error minimisation scheme. These models provide an excellent fit to the time series of the given wavelet transform coefficient maxima obtained from actual data. Statistical significance tests on these deviations suggest with 99% confidence that the deviations are sample fluctuations obtained from normal distributions. In fact, our earlier studies (see, Bhattacharyya and Narasimha, 2005, Geophys. Res. Lett., Vol. 32, No. 5) revealed that average rainfall is higher during periods of greater solar activity for all cases, at confidence levels varying from 75% to 99%, being 95% or greater in 3 out of 7 of them. Analysis using standard wavelet techniques reveals higher power in the 8-16 y band during the higher solar activity period, in 6 of the 7 rainfall time series, at confidence levels exceeding 99.99%. 
Furthermore, a comparison between the wavelet cross spectra of solar activity with rainfall and noise (including those simulating the rainfall spectrum and probability distribution) revealed that over the two test-periods respectively of high and low solar activity, the average cross power of the solar activity index with rainfall exceeds that with the noise at z-test confidence levels exceeding 99.99% over period-bands covering the 11.6 y sunspot cycle (see, Bhattacharyya and Narasimha, SORCE 2005 14-16th September, at Durango, Colorado USA). These results provide strong evidence for connections between Indian rainfall and solar activity. The present study reveals in addition the presence of subharmonics of the solar cycle period in the monsoon rainfall time series together with information on their phase relationships.

  9. Alternate methods for FAAT S-curve generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, A.M.

    The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory work-around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work-around. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
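
    Under the log normal assumption, each S-curve is just a log normal cumulative distribution in the incident field strength. A sketch with hypothetical parameters (the median failure level and log-space sigma below are chosen for illustration only):

```python
import math

def lognormal_cdf(x, median, sigma_ln):
    """Log normal cumulant: probability that a system's (random) strength is
    exceeded at field level x, given the median strength and the log-space
    standard deviation of the random uncertainty."""
    if x <= 0:
        return 0.0
    z = (math.log(x) - math.log(median)) / sigma_ln
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical S-curve: median failure level 10 W/cm^2, sigma_ln = 0.5
for field in (5, 10, 20):
    print(field, round(lognormal_cdf(field, 10.0, 0.5), 3))
```

    Shifting the median according to the systematic uncertainty at each confidence bound is what places the family of S-curves; the placement errors discussed above enter through that step.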

  10. Serum uric acid and cancer mortality and incidence: a systematic review and meta-analysis.

    PubMed

    Dovell, Frances; Boffetta, Paolo

    2018-07-01

    Elevated serum uric acid (SUA) is a marker of chronic inflammation and has been suggested to be associated with increased risk of cancer, but its antioxidant capacity would justify an anticancer effect. Previous meta-analyses did not include all available results. We conducted a systematic review of prospective studies on SUA level and risk of all cancers and specific cancers, and a meta-analysis based on random-effects models for high versus low SUA level as well as for a 1 mg/dl increase in SUA. The relative risk of all cancers for high versus low SUA level was 1.11 (95% confidence interval: 0.94-1.27; 11 risk estimates); that for a 1 mg/dl increase in SUA level was 1.03 (95% confidence interval: 0.99-1.07). Similar results were obtained for lung cancer (six risk estimates) and colon cancer (four risk estimates). Results for other cancers were sparse. Elevated SUA levels appear to be associated with a modest increase in overall cancer risk, although the combined risk estimate did not reach the formal level of statistical significance. Results for specific cancers were limited and mainly negative.

  11. Consistent Tolerance Bounds for Statistical Distributions

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    The assumption that a sample comes from a population with a particular distribution is made with confidence C if the data lie between certain bounds. These "confidence bounds" depend on C and on the assumed distribution of sampling errors around the regression line. Graphical test criteria using tolerance bounds are applied in industry where statistical analysis influences product development and use. Applied to evaluate equipment life.

  12. Data on electrical energy conservation using high efficiency motors for the confidence bounds using statistical techniques.

    PubMed

    Shaikh, Muhammad Mujtaba; Memon, Abdul Jabbar; Hussain, Manzoor

    2016-09-01

    In this article, we describe details of the data used in the research paper "Confidence bounds for energy conservation in electric motors: An economical solution using statistical techniques" [1]. The data presented in this paper is intended to show benefits of high efficiency electric motors over the standard efficiency motors of similar rating in the industrial sector of Pakistan. We explain how the data was collected and then processed by means of formulas to show cost effectiveness of energy efficient motors in terms of three important parameters: annual energy saving, cost saving and payback periods. This data can be further used to construct confidence bounds for the parameters using statistical techniques as described in [1].
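
    The three parameters reduce to a few lines of arithmetic. The motor rating, efficiencies, tariff, and price premium below are hypothetical placeholders, not figures from the dataset:

```python
def motor_savings(p_kw, hours, eff_std, eff_he, tariff, price_premium):
    """Annual energy saving (kWh), cost saving, and simple payback period
    of a high-efficiency motor over a standard-efficiency one."""
    kwh_saved = p_kw * hours / eff_std - p_kw * hours / eff_he
    cost_saved = kwh_saved * tariff
    return kwh_saved, cost_saved, price_premium / cost_saved

# Hypothetical 11 kW motor, 4000 h/year; tariff and premium in local currency
kwh, cost, payback = motor_savings(11, 4000, 0.88, 0.92, 12.0, 15000.0)
print(round(kwh), round(cost), round(payback, 2))
```

    Repeating the calculation across sampled motors is what yields the spread from which confidence bounds on the three parameters can then be constructed.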

  13. Preparing for fieldwork: Students' perceptions of their readiness to provide evidence-based practice.

    PubMed

    Evenson, Mary E

    2013-01-01

    The purpose of this study was to explore students' perceptions of their confidence to use research evidence to complete a client case analysis assignment in preparation for participation in fieldwork and future practice. A convenience sample of 42 entry-level occupational therapy Masters students included 41 females and one male, ages 24 to 35. A quasi-experimental pretest-posttest design was used. Students participated in a problem-based learning approach supported by educational technology. Measures included a pre- and post-semester confidence survey, a post-semester satisfaction survey, and an assignment rubric. Based on paired t-tests and Wilcoxon Signed Ranks Tests, statistically significant differences in pre- and post-test scores were noted for all 18 items on the confidence survey (p<0.001). Significant increases in students' confidence were noted for verbal and written communication of descriptive, assessment, and intervention evidence, along with increased confidence to effectively use assessment evidence. Results suggest that problem-based learning methods were significantly associated with students' perceptions of their confidence to use research evidence to analyze a client case. These results cannot necessarily be generalized due to the limitations of using non-standardized measures with a convenience sample, without a control group, within the context of a single course as part of one academic program curriculum.

  14. Overconfidence across the psychosis continuum: a calibration approach.

    PubMed

    Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen

    2016-11-01

    An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls, however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence for accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds in which they were 80% confident the true answer lay within. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration for the two delusional groups may be better explained by their greater unawareness of their underperformance, rather than representing genuinely inflated overconfidence in errors.

  15. Building hospital pharmacy practice research capacity in Qatar: a cross-sectional survey of hospital pharmacists.

    PubMed

    Stewart, Derek; Al Hail, Moza; Abdul Rouf, P V; El Kassem, Wessam; Diack, Lesley; Thomas, Binny; Awaisu, Ahmed

    2015-06-01

    There is a need to systematically develop research capacity within pharmacy practice. Hamad Medical Corporation (HMC) is the principal non-profit health care provider in Qatar. Traditionally, pharmacists in Qatar have had limited training in research and lack direct experience of research processes. To determine the interests, experience and confidence of hospital pharmacists employed by HMC in relation to research; their attitudes towards research; and the facilitators of and barriers to research participation. Hospital pharmacy, Qatar. A cross-sectional survey of all pharmacists (n = 401). Responses were analysed using descriptive and inferential statistics, and principal component analysis (PCA). Interests, experience and confidence in research; attitudes towards research; and facilitators and barriers to participation in research. The response rate was 53.1% (n = 213). High levels of interest were expressed for all aspects of research, with respondents less experienced and less confident. Summary scores for items of interest were significantly higher than experience and confidence (p < 0.001). PCA identified four components: general attitudes towards research; confidence, motivation and resources; research culture; and support. While respondents were generally positive in response to all items, they were less sure of resources to conduct research, access to training and statistical support. They were also generally unsure of many aspects relating to research culture. Half (50.7%, n = 108) had either never thought about being involved in research or taken no action. In multivariate binary logistic regression analysis, the significant factors were possessing postgraduate qualifications [odds ratio (OR) 3.48 (95% CI 1.73-6.99), p < 0.001] and having more positive general attitudes to research [OR 3.24 (95% CI 1.62-4.67), p = 0.001]. Almost all (89.7%, n = 172) expressed interest in being involved in research training. 
HMC pharmacists expressed significantly higher levels of interest in research compared with their experience and confidence. While general attitudes towards research were positive, there were some barriers relating to support (e.g. administration) and research culture. Positive attitudes towards research and possessing postgraduate qualifications were significant in relation to readiness to participate in research and research training. Findings are of key relevance to the aims of research capacity building: encouraging research, improving skills and identifying skills gaps.

  16. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 − α)% two-sided CI to construct CIs for any effect size measure θ that contain all point-null values of θ that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
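    The RTI procedure can be sketched as follows (an illustrative Python implementation, not the authors' supplementary R code; the data and grid choices are assumptions): for each candidate θ₀ on a grid, shift one condition's scores by θ₀, compute the randomization p-value by enumerating all assignments, and keep every θ₀ the test cannot reject at level α.

```python
import itertools
import numpy as np

def randomization_p_value(a, b, theta0):
    """Two-sided randomization p-value for H0: mean(B) - mean(A) = theta0.

    Shifting the B scores by theta0 reduces H0 to 'no difference', after
    which every reassignment of the pooled scores to the two conditions
    is enumerated (feasible only for small single-case designs).
    """
    a = np.asarray(a, float)
    b = np.asarray(b, float) - theta0
    pooled = np.concatenate([a, b])
    n_a = len(a)
    observed = b.mean() - a.mean()
    extreme = total = 0
    for a_idx in itertools.combinations(range(len(pooled)), n_a):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(a_idx)] = True
        stat = pooled[~mask].mean() - pooled[mask].mean()
        if abs(stat) >= abs(observed) - 1e-12:
            extreme += 1
        total += 1
    return extreme / total

def rti_confidence_interval(a, b, alpha=0.05, grid=None):
    """CI = all theta0 values the randomization test cannot reject at alpha."""
    if grid is None:  # crude default grid around the observed difference
        span = max(*a, *b) - min(*a, *b)
        center = float(np.mean(b) - np.mean(a))
        grid = np.linspace(center - 2 * span, center + 2 * span, 81)
    kept = [t for t in grid if randomization_p_value(a, b, t) > alpha]
    return min(kept), max(kept)
```

Because the CI is built from test inversion, it inherits the exactness of the randomization test without any normality assumption; the price is the exhaustive enumeration, which larger designs would replace with Monte Carlo sampling of assignments.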

  17. Testing for clustering at many ranges inflates family-wise error rate (FWE).

    PubMed

    Loop, Matthew Shane; McClure, Leslie A

    2015-01-15

    Testing for clustering at multiple ranges within a single dataset is a common practice in spatial epidemiology. It is not documented whether this approach has an impact on the type I error rate. We estimated the family-wise error rate (FWE) for the difference in Ripley's K functions test, when testing at an increasing number of ranges at an alpha-level of 0.05. Case and control locations were generated from a Cox process on a square area the size of the continental US (≈3,000,000 mi²). Two thousand Monte Carlo replicates were used to estimate the FWE with 95% confidence intervals when testing for clustering at one range, as well as 10, 50, and 100 equidistant ranges. The estimated FWE and 95% confidence intervals when testing 10, 50, and 100 ranges were 0.22 (0.20 - 0.24), 0.34 (0.31 - 0.36), and 0.36 (0.34 - 0.38), respectively. Testing for clustering at multiple ranges within a single dataset inflated the FWE above the nominal level of 0.05. Investigators should construct simultaneous critical envelopes (available in spatstat package in R), or use a test statistic that integrates the test statistics from each range, as suggested by the creators of the difference in Ripley's K functions test.
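    The inflation can be reproduced in spirit with a toy Monte Carlo (independent null tests standing in as a simplified proxy for the correlated Ripley's K statistics at nearby ranges; the correlation slows, but does not prevent, the inflation):

```python
import numpy as np

def estimate_fwe(n_tests, alpha=0.05, n_reps=2000, seed=0):
    """Monte Carlo estimate of the family-wise error rate when n_tests
    independent null hypotheses are each tested at level alpha.

    Mirrors the study's design of 2000 replicates, but with idealized
    independent tests rather than the difference in Ripley's K functions.
    """
    rng = np.random.default_rng(seed)
    p = rng.uniform(size=(n_reps, n_tests))   # p-values under the null
    any_rejection = (p < alpha).any(axis=1)   # family-wise error event
    return any_rejection.mean()
```

With independent tests the FWE approaches 1 − (1 − α)ⁿ, so 10 ranges already push it near 0.40; the reported 0.22 at 10 ranges reflects the positive correlation among K statistics at neighboring ranges.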

  18. Surveillance metrics sensitivity study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  19. Surveillance Metrics Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  20. Brain networks for confidence weighting and hierarchical inference during probabilistic learning.

    PubMed

    Meyniel, Florent; Dehaene, Stanislas

    2017-05-09

    Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.

  1. Brain networks for confidence weighting and hierarchical inference during probabilistic learning

    PubMed Central

    Meyniel, Florent; Dehaene, Stanislas

    2017-01-01

    Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This “confidence weighting” implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain’s learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences. PMID:28439014
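    The core idea of confidence weighting, that new evidence should move an estimate less when accumulated evidence is already strong, while forgetting keeps the learner sensitive to change points, can be sketched with a leaky Beta-Bernoulli update (a crude stand-in for the full hierarchical ideal observer; the leak constant is an assumption, not a value from the study):

```python
def confidence_weighted_update(alpha, beta, observation, leak=0.95):
    """One step of a confidence-weighted estimate of a transition
    probability: a Beta(alpha, beta) belief updated with a binary
    observation (0 or 1).

    The leak discounts old evidence each step, so total evidence (and
    hence confidence) stays bounded and the estimate remains sensitive
    to sudden changes in the underlying probability.
    """
    alpha = leak * alpha + observation
    beta = leak * beta + (1 - observation)
    mean = alpha / (alpha + beta)          # current probability estimate
    confidence = alpha + beta              # evidence count: higher = more confident
    return alpha, beta, mean, confidence
```

The effective learning rate of this update is 1 / (evidence + 1), so it shrinks as confidence grows, which is the signature behavior the fMRI study attributes to the brain's learning algorithm.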

  2. Upper limits for the photoproduction cross section for the Φ--(1860) pentaquark state off the deuteron

    NASA Astrophysics Data System (ADS)

    Egiyan, H.; Langheinrich, J.; Gothe, R. W.; Graham, L.; Holtrop, M.; Lu, H.; Mattione, P.; Mutchler, G.; Park, K.; Smith, E. S.; Stepanyan, S.; Zhao, Z. W.; Adhikari, K. P.; Aghasyan, M.; Anghinolfi, M.; Baghdasaryan, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Bedlinskiy, I.; Bennett, R. P.; Biselli, A. S.; Bookwalter, C.; Branford, D.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Contalbrigo, M.; D'Angelo, A.; Daniel, A.; Dashyan, N.; de Vita, R.; de Sanctis, E.; Deur, A.; Dey, B.; Dickson, R.; Djalali, C.; Doughty, D.; Dupre, R.; El Alaoui, A.; El Fassi, L.; Eugenio, P.; Fedotov, G.; Fegan, S.; Fradi, A.; Gabrielyan, M. Y.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Griffioen, K. A.; Guidal, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Heddle, D.; Hicks, K.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Jo, H. S.; Joo, K.; Khetarpal, P.; Kim, A.; Kim, W.; Klein, A.; Klein, F. J.; Kubarovsky, V.; Kuleshov, S. V.; Livingston, K.; MacGregor, I. J. D.; Mao, Y.; Mayer, M.; McKinnon, B.; Mokeev, V.; Munevar, E.; Nadel-Turonski, P.; Ni, A.; Niculescu, G.; Ostrovidov, A. I.; Paolone, M.; Pappalardo, L.; Paremuzyan, R.; Park, S.; Pasyuk, E.; Anefalos Pereira, S.; Phelps, E.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Raue, B. A.; Ricco, G.; Rimal, D.; Ripani, M.; Ritchie, B. G.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, G. D.; Sober, D. I.; Stepanyan, S. S.; Strauch, S.; Taiuti, M.; Tang, W.; Taylor, C. E.; Tedeschi, D. J.; Ungaro, M.; Voutier, E.; Watts, D. P.; Weinstein, L. B.; Weygand, D. P.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhao, B.

    2012-01-01

    We searched for the Φ--(1860) pentaquark in the photoproduction process off the deuteron in the Ξ-π--decay channel using CLAS. The invariant-mass spectrum of the Ξ-π- system does not indicate any statistically significant enhancement near the reported mass M=1.860 GeV. The statistical analysis of the sideband-subtracted mass spectrum yields a 90%-confidence-level upper limit of 0.7 nb for the photoproduction cross section of Φ--(1860) with a subsequent decay into Ξ-π- in the photon-energy range 4.5 GeV

  3. A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.

  4. The case for increasing the statistical power of eddy covariance ecosystem studies: why, where and how?

    PubMed

    Hill, Timothy; Chocholek, Melanie; Clement, Robert

    2017-06-01

    Eddy covariance (EC) continues to provide invaluable insights into the dynamics of Earth's surface processes. However, despite its many strengths, spatial replication of EC at the ecosystem scale is rare. High equipment costs are likely to be partially responsible. This contributes to the low sampling, and even lower replication, of ecoregions in Africa, Oceania (excluding Australia) and South America. The level of replication matters as it directly affects statistical power. While the ergodicity of turbulence and temporal replication allow an EC tower to provide statistically robust flux estimates for its footprint, these principles do not extend to larger ecosystem scales. Despite the challenge of spatially replicating EC, it is clearly of interest to be able to use EC to provide statistically robust flux estimates for larger areas. We ask: How much spatial replication of EC is required for statistical confidence in our flux estimates of an ecosystem? We provide the reader with tools to estimate the number of EC towers needed to achieve a given statistical power. We show that for a typical ecosystem, around four EC towers are needed to have 95% statistical confidence that the annual flux of an ecosystem is nonzero. Furthermore, if the true flux is small relative to instrument noise and spatial variability, the number of towers needed can rise dramatically. We discuss approaches for improving statistical power and describe one solution: an inexpensive EC system that could help by making spatial replication more affordable. However, we note that diverting limited resources from other key measurements in order to allow spatial replication may not be optimal, and a balance needs to be struck. While individual EC towers are well suited to providing fluxes from the flux footprint, we emphasize that spatial replication is essential for statistically robust fluxes if a wider ecosystem is being studied. 
© 2016 The Authors Global Change Biology Published by John Wiley & Sons Ltd.
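    The tower-count question above reduces to a standard sample-size calculation. A normal-approximation sketch (illustrative only; the paper's own tools additionally account for instrument noise and footprint effects, and the signal-to-noise inputs below are assumptions):

```python
import math
from statistics import NormalDist

def towers_needed(mean_flux, between_tower_sd, alpha=0.05, power=0.95):
    """Approximate number of replicate EC towers needed for a one-sample
    test of 'ecosystem annual flux = 0' to reach the requested power.

    Normal-approximation formula (not the paper's exact procedure):
    n = ((z_{1-alpha/2} + z_power) * sd / mean)^2, rounded up.
    """
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) * between_tower_sd / mean_flux) ** 2
    return max(2, math.ceil(n))  # need at least 2 towers to estimate spread
```

For a flux about twice the between-tower spatial variability this gives roughly four towers, consistent with the abstract's headline figure; when the true flux is small relative to the variability, the required count grows with the square of the inverse signal-to-noise ratio.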

  5. Psychosocial and demographic variables associated with consumer intention to purchase sustainably produced foods as defined by the Midwest Food Alliance.

    PubMed

    Robinson, Ramona; Smith, Chery

    2002-01-01

    To examine psychosocial and demographic variables associated with consumer intention to purchase sustainably produced foods using an expanded Theory of Planned Behavior. Consumers were approached at the store entrance and asked to complete a self-administered survey. Three metropolitan Minnesota grocery stores. Participants (n = 550) were adults who shopped at the store: the majority were white, female, and highly educated and earned ≥ $50,000/year. Participation rates averaged 62%. The major domain investigated was consumer support for sustainably produced foods. Demographics, beliefs, attitudes, subjective norm, self-identity, and perceived behavioral control were evaluated as predictors of intention to purchase them. Descriptive statistics, independent t tests, one-way analysis of variance, Pearson product moment correlation coefficients, and stepwise multiple regression analyses (P < .05). Consumers were supportive of sustainably produced foods but not highly confident in their ability to purchase them. Independent predictors of intention to purchase them included attitudes, beliefs, perceived behavioral control, subjective norm, past buying behavior, and marital status. Beliefs, attitudes, and confidence level may influence intention to purchase sustainably produced foods. Nutrition educators could increase consumers' awareness of sustainably produced foods by understanding their beliefs, attitudes, and confidence levels.

  6. The Role of Residential Early Parenting Services in Increasing Parenting Confidence in Mothers with A History of Infertility

    PubMed Central

    Khajehei, Marjan; Finch, Lynette

    2016-01-01

    Background Mothers with a history of infertility may experience parenting difficulties and challenges. This study was conducted to investigate the role of residential early parenting services in increasing parenting confidence in mothers with a history of infertility. Materials and Methods This was a retrospective chart review study using the quantitative data from the clients attending the Karitane Residential Units and Parenting Services (known as Karitane RUs) during 2013. Parenting confidence (using the Karitane Parenting Confidence Scale, KPCS), depression, demographics, reproductive and medical history, as well as child’s information were assessed from a sample of 27 mothers who had a history of infertility and who attended the Karitane RUs for support and assistance. The data were analyzed using SPSS version 19. Results More than half of the women (59.3%) reported a relatively low level of parenting confidence on the day of admission. The rate of low parenting confidence, however, dropped to 22.2% after receiving 4-5 days of support and training in the Karitane RUs. The mean score of the KPCS increased from 36.9 ± 5.6 before the intervention to 41.1 ± 3.4 after the intervention, indicating an improvement in the parenting confidence of the mothers after attending the Karitane RUs (P<0.0001). No statistically significant association was found between low maternal parenting confidence and parental demographics (including age, country of birth, and employment status), a history of help-seeking, symptoms of depression, as well as child’s information [including gender, age, siblings, diagnosis of gastroesophageal reflux disease (GORD) and use of medication]. Conclusion Having a child after a period of infertility can be a stressful experience for some mothers. This can result in low parenting confidence and affect parent-child attachment. 
Our findings emphasized the role of residential early parenting services in promoting parenting confidence and highlighted the need for early recognition and referral of mothers with a history of infertility to such centers. PMID:27441050

  7. Validating a biometric authentication system: sample size requirements.

    PubMed

    Dass, Sarat C; Zhu, Yongfang; Jain, Anil K

    2006-12-01

    Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied to samples collected from a small population.

  8. Integration of multiple biological features yields high confidence human protein interactome.

    PubMed

    Karagoz, Kubra; Sevimoglu, Tuba; Arga, Kazim Yalcin

    2016-08-21

    The biological function of a protein is usually determined by its physical interaction with other proteins. Protein-protein interactions (PPIs) are identified through various experimental methods and are stored in curated databases. The noisiness of the existing PPI data is evident, and it is essential that more reliable data be generated. Furthermore, the selection of a set of PPIs at different confidence levels might be necessary for many studies. Although different methodologies were introduced to evaluate the confidence scores for binary interactions, a highly reliable, almost complete PPI network of Homo sapiens has not yet been proposed. The quality and coverage of the human protein interactome need to be improved for use in various disciplines, especially in biomedicine. In the present work, we propose an unsupervised statistical approach to assign confidence scores to PPIs of H. sapiens. To achieve this goal PPI data from six different databases were collected and a total of 295,288 non-redundant interactions between 15,950 proteins were acquired. The present scoring system included the context information that was assigned to PPIs derived from eight biological attributes. A high confidence network, which included 147,923 binary interactions between 13,213 proteins, had scores greater than the cutoff value of 0.80, for which sensitivity, specificity, and coverage were 94.5%, 80.9%, and 82.8%, respectively. We compared the present scoring method with others for evaluation. Reducing the noise inherent in experimental PPIs via our scoring scheme increased the accuracy significantly. As it was demonstrated through the assessment of process and cancer subnetworks, this study allows researchers to construct and analyze context-specific networks via valid PPI sets, and one can easily achieve subnetworks around proteins of interest at a specified confidence level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A global compilation of coral sea-level benchmarks: Implications and new challenges

    NASA Astrophysics Data System (ADS)

    Medina-Elizalde, Martín

    2013-01-01

    I present a quality-controlled compilation of sea-level data from U-Th dated corals, encompassing 30 studies of 13 locations around the world. The compilation contains relative sea level (RSL) data from each location based on both conventional and open-system U-Th ages. I have applied a commonly used age quality control criterion based on the initial 234U/238U activity ratios of corals in order to select reliable ages and to reconstruct sea level histories for the last 150,000 yr. This analysis reveals scatter of RSL estimates among coeval coral benchmarks both within individual locations and between locations, particularly during Marine Isotope Stage (MIS) 5a and the glacial inception following the last interglacial. The character of data scatter during these time intervals implies that uncertainties still exist regarding tectonics, glacio-isostasy, U-series dating, and/or coral position. To elucidate robust underlying patterns, with confidence limits, I performed a Monte Carlo-style statistical analysis of the compiled coral data considering appropriate age and sea-level uncertainties. By its nature, such an analysis has the tendency to smooth/obscure millennial-scale (and finer) details that may be important in individual datasets, and favour the major underlying patterns that are supported by all datasets. This statistical analysis is thus useful for illustrating major trends that are statistically robust ('what we know'), trends that are suggested but supported by only a few data ('what we might know, subject to addition of more supporting data and improved corrections'), and which patterns/data are clear outliers ('unlikely to be realistic given the rest of the global data and possibly needing further adjustments'). 
Prior to the last glacial maximum and with the possible exception of the 130-120 ka period, available coral data generally have insufficient temporal resolution and unexplained scatter, which hinders identification of a well-defined pattern with usefully narrow confidence limits. This analysis thus provides a framework that objectively identifies critical targets for new data collection, improved corrections, and integration of coral data with independent, stratigraphically continuous methods of sea-level reconstruction.

  10. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
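    The two advocated statistics fall directly out of the empirical CDF of absolute errors. A minimal sketch (illustrative implementation; the threshold and confidence inputs are user choices, not values from the paper):

```python
import numpy as np

def ecdf_stats(errors, threshold, confidence=0.95):
    """The two ECDF-based benchmarking statistics described above:
    (1) the probability that a new calculation's absolute error falls
        below `threshold`, estimated as the empirical CDF at that point;
    (2) the error amplitude not exceeded at the chosen confidence level,
        i.e. the empirical quantile of the absolute errors.
    """
    abs_err = np.abs(np.asarray(errors, float))
    p_below = (abs_err < threshold).mean()          # ECDF at threshold
    q_conf = np.quantile(abs_err, confidence)       # high-confidence bound
    return p_below, q_conf
```

Unlike a mean unsigned error, these quantities answer the end-user's question directly: "how likely is my next prediction to be within X, and how wrong could it plausibly be?"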

  11. Confidence interval or p-value?: part 4 of a series on evaluation of scientific publications.

    PubMed

    du Prel, Jean-Baptist; Hommel, Gerhard; Röhrig, Bernd; Blettner, Maria

    2009-05-01

    An understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts. The uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles. P-values in scientific studies are used to determine whether a null hypothesis formulated before the study was performed should be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.

  12. Development of welding emission factors for Cr and Cr(VI) with a confidence level.

    PubMed

    Serageldin, Mohamed; Reeves, David W

    2009-05-01

    Knowledge of the emission rate and release characteristics is necessary for estimating pollutant fate and transport. Because emission measurements at a facility's fence line are generally not readily available, environmental agencies in many countries are using emission factors (EFs) to indicate the quantity of certain pollutants released into the atmosphere from operations such as welding. The amount of fumes and metals generated from a welding process is dependent on many parameters, such as electrode composition, voltage, and current. Because test reports on fume generation provide different levels of detail, a common approach was used to give a test report a quality rating on the basis of several highly subjective criteria; however, weighted average EFs generated in this way are not meant to reflect data precision or to be used for a refined risk analysis. The 95% upper confidence limit (UCL) of the unknown population mean was used in this study to account for the uncertainty in the EF test data. Several parametric UCLs were computed and compared for multiple welding EFs associated with several mild, stainless, and alloy steels. Also, several nonparametric statistical methods, including several bootstrap procedures, were used to compute 95% UCLs. For the nonparametric methods, a distribution for calculating the mean, standard deviation, and other statistical parameters for a dataset does not need to be assumed. There were instances when the sample size was small and instances when EFs for an electrode/process combination were not found. Those two points are addressed in this paper. Finally, this paper is an attempt to deal with the uncertainty in the value of a mean EF for an electrode/process combination that is based on test data from several laboratories. Welding EFs developed with a defined level of confidence may be used as input parameters for risk assessment.
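    One of the distribution-free options mentioned above, a percentile-bootstrap 95% upper confidence limit on the mean emission factor, can be sketched as follows (illustrative; the study compares several bootstrap variants and parametric UCLs, and the data here are hypothetical):

```python
import numpy as np

def bootstrap_ucl(data, confidence=0.95, n_boot=5000, seed=0):
    """Nonparametric upper confidence limit (UCL) of the mean via the
    percentile bootstrap: resample the test data with replacement,
    recompute the mean each time, and take the chosen percentile of
    the resampled means. No distributional assumption is required.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    boot_means = rng.choice(data, size=(n_boot, data.size)).mean(axis=1)
    return np.quantile(boot_means, confidence)
```

Because the UCL exceeds the sample mean by a margin that reflects sampling uncertainty, using it in place of the mean builds the desired level of confidence directly into risk-assessment inputs, which is the paper's central point.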

  13. Information theoretic partitioning and confidence based weight assignment for multi-classifier decision level fusion in hyperspectral target recognition applications

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Bruce, L. M.

    2007-04-01

    There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. 
The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
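
A minimal sketch of the confidence-based weight assignment described above, assuming the fused decision is a weighted sum of class posteriors with weights proportional to each subspace classifier's training accuracy (posteriors and accuracies below are hypothetical):

```python
def fuse_decisions(class_probs, train_accuracies):
    """Confidence-weighted decision-level fusion: each classifier's class
    posteriors are weighted by its normalized training accuracy, and the
    fused posteriors decide the label."""
    total = sum(train_accuracies)
    weights = [a / total for a in train_accuracies]  # normalize to sum to 1
    n_classes = len(class_probs[0])
    fused = [sum(w * probs[c] for w, probs in zip(weights, class_probs))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# hypothetical posteriors from three subspace (band-group) classifiers
probs = [[0.6, 0.4],   # band group 1, training accuracy 0.90
         [0.3, 0.7],   # band group 2, training accuracy 0.60
         [0.8, 0.2]]   # band group 3, training accuracy 0.85
label, fused = fuse_decisions(probs, [0.90, 0.60, 0.85])
```

Here the low-accuracy classifier's dissenting vote for class 1 is down-weighted, so the fused decision is class 0.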

  14. The Math Problem: Advertising Students' Attitudes toward Statistics

    ERIC Educational Resources Information Center

    Fullerton, Jami A.; Kendrick, Alice

    2013-01-01

    This study used the Students' Attitudes toward Statistics Scale (STATS) to measure attitude toward statistics among a national sample of advertising students. A factor analysis revealed that four underlying factors make up the attitude toward statistics construct--"Interest & Future Applicability," "Confidence," "Statistical Tools," and "Initiative."…

  15. Two Different Views on the World Around Us: The World of Uniformity versus Diversity

    PubMed Central

    Nayakankuppam, Dhananjay

    2016-01-01

    We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of “uniformity.” As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of “diversity,” such that they “hesitate” to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as constituted of either homogeneous or heterogeneous entities. PMID:27977788

  16. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics

    PubMed Central

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-01-01

    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, as this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT tag Confidence (STAC). STAC additionally provides a Uniqueness Probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application. PMID:21692516

  17. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…

  18. Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.

    2009-02-18

    The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site.
Most of the sampling designs validated are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceeds levels of concern, at least one of the samples would be taken from a contaminated area, at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
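
The "X% of the time" criterion above can be checked directly: for randomly placed samples, the probability that at least one lands in a contaminated fraction f of the site is 1 − (1 − f)^n. A small sketch (not VSP's actual algorithm, which also covers gridded designs and hot-spot geometry):

```python
def achieved_confidence(n_samples, contaminated_fraction):
    """Probability that at least one of `n_samples` randomly placed samples
    falls within a contaminated area covering `contaminated_fraction` of
    the site: 1 - P(every sample misses) = 1 - (1 - f)**n."""
    return 1 - (1 - contaminated_fraction) ** n_samples

# 299 random samples meet a 95% confidence target for a hot spot
# covering 1% of the area; 100 samples do not
conf_299 = achieved_confidence(299, 0.01)
```

Validation in this sense amounts to confirming that the n produced by the software makes this achieved confidence reach the prescribed X%.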

  19. The thresholds for statistical and clinical significance – a five-step procedure for evaluation of intervention effects in randomised clinical trials

    PubMed Central

    2014-01-01

    Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
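
The Bayes factor of step (2) can be sketched under a normal approximation: the likelihood of the observed effect assuming a zero effect, divided by its likelihood assuming the effect size hypothesised in the sample size calculation. The trial numbers below are hypothetical:

```python
import math

def bayes_factor_null_vs_alt(estimate, hypothesized_effect, standard_error):
    """Ratio of the likelihood of the observed effect under the 'null'
    (zero effect) to its likelihood under the hypothesised intervention
    effect, using a normal approximation for both."""
    z_null = estimate / standard_error
    z_alt = (estimate - hypothesized_effect) / standard_error
    return math.exp(-0.5 * z_null ** 2) / math.exp(-0.5 * z_alt ** 2)

# hypothetical trial: observed effect 0.15, powered for an effect of 0.30,
# standard error 0.10; an observed effect midway between the null and the
# hypothesised effect gives BF = 1 (equally compatible with both)
bf = bayes_factor_null_vs_alt(0.15, 0.30, 0.10)
```

Values well below 1 favour the hypothesised intervention effect; values well above 1 favour the null.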

  20. Preserved Statistical Learning of Tonal and Linguistic Material in Congenital Amusia

    PubMed Central

    Omigie, Diana; Stewart, Lauren

    2011-01-01

    Congenital amusia is a lifelong disorder whereby individuals have pervasive difficulties in perceiving and producing music. In contrast, typical individuals display a sophisticated understanding of musical structure, even in the absence of musical training. Previous research has shown that they acquire this knowledge implicitly, through exposure to music's statistical regularities. The present study tested the hypothesis that congenital amusia may result from a failure to internalize statistical regularities – specifically, lower-order transitional probabilities. To explore the specificity of any potential deficits to the musical domain, learning was examined with both tonal and linguistic material. Participants were exposed to structured tonal and linguistic sequences and, in a subsequent test phase, were required to identify items which had been heard in the exposure phase, as distinct from foils comprising elements that had been present during exposure, but presented in a different temporal order. Amusic and control individuals showed comparable learning, for both tonal and linguistic material, even when the tonal stream included pitch intervals around one semitone. However, analysis of binary confidence ratings revealed that amusic individuals have less confidence in their abilities and that their performance in learning tasks may not be contingent on explicit knowledge formation or level of awareness to the degree shown in typical individuals. The current findings suggest that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors. PMID:21779263

  1. Is it possible to identify a risk factor condition of hypocalcemia in patients candidates to thyroidectomy for benign disease?

    PubMed

    Del Rio, Paolo; Iapichino, Gioacchino; De Simone, Belinda; Bezer, Lamia; Arcuri, MariaFrancesca; Sianesi, Mario

    2010-01-01

    Hypocalcaemia is the most frequent complication after total thyroidectomy, and its reported incidence varies widely in the literature. We report on 227 patients undergoing surgery for benign thyroid disease. After obtaining each patient's informed consent, we prospectively collected and analyzed pre- and postoperative serum calcium levels in the first 24 hours after surgery, stratified by sex, age, duration of surgery, number of parathyroid glands identified by the surgeon, and surgical technique (open versus minimally invasive video-assisted thyroidectomy, i.e., MIVAT). All cases were treated consecutively by the same two experienced endocrine surgeons. Hypocalcaemia was defined as a serum calcium value below 7.5 mg/dL. Pre- and postoperative mean serum calcium, with 99% confidence intervals stratified by sex, showed a statistically significant difference in incidence on ANOVA testing (p < 0.01): female sex had a higher incidence of hypocalcemia. Mean pre- and postoperative serum calcium, with 95% confidence intervals, was not correlated with the number of parathyroid glands identified by the surgeon. Age and pre- and postoperative serum calcium values, with 99% confidence intervals stratified by sex, did not show statistically significant differences. We found no significant difference in postoperative hypocalcemia between patients treated with conventional thyroidectomy and those treated with MIVAT. A difference between pre- and postoperative mean serum calcium occurred in all surgically treated patients. The only statistically significant risk factor for hypocalcemia was female sex.

  2. Clinical skills development in student-run free clinic volunteers: a multi-trait, multi-measure study.

    PubMed

    Nakamura, Mio; Altshuler, David; Chadwell, Margit; Binienda, Juliann

    2014-12-12

    At Wayne State University School of Medicine (WSU SOM), the Robert R. Frank Student Run Free Clinic (SRFC) is one place preclinical students can gain clinical experience. There have been no published studies to date measuring the impact of student-run free clinic (SRFC) volunteerism on clinical skills development in preclinical medical students. Surveys were given to first year medical students at WSU SOM at the beginning and end of Year 1 to assess perception of clinical skills, including self-confidence, self-reflection, and professionalism. Scores of the Year 1 Objective Structured Clinical Exam (OSCE) were compared between SRFC volunteers and non-volunteers. There were a total of 206 (68.2%) and 80 (26.5%) survey responses at the beginning and end of Year 1, respectively. Of the 80 students, 31 (38.7%) volunteered at SRFC during Year 1. Statistically significant differences were found between time points in self-confidence (p < 0.001) in both groups. When looking at self-confidence in skills pertaining to SRFC, the difference between groups was statistically significant (p = 0.032) at both time points. A total of 302 students participated in the Year 1 OSCE, 27 (9%) of which were SRFC volunteers. No statistically significant differences were found between groups for mean score (p = 0.888) and established level of rapport (p = 0.394). While this study indicated no significant differences in clinical skills in students who volunteer at the SRFC, it is a first step in attempting to measure clinical skill development outside of the structured medical school setting. The findings lend themselves to development of research designs, clinical surveys, and future studies to measure the impact of clinical volunteer opportunities on clinical skills development in future physicians.

  3. Upper limits for the photoproduction cross section for the Φ⁻⁻(1860) pentaquark state off the deuteron

    DOE PAGES

    Egiyan, H.; Langheinrich, J.; Gothe, R. W.; ...

    2012-01-30

    We searched for the Φ⁻⁻(1860) pentaquark in the photoproduction process off the deuteron in the Ξ⁻π⁻-decay channel using CLAS. The invariant-mass spectrum of the Ξ⁻π⁻ system does not indicate any statistically significant enhancement near the reported mass M = 1.860 GeV. The statistical analysis of the sideband-subtracted mass spectrum yields a 90%-confidence-level upper limit of 0.7 nb for the photoproduction cross section of Φ⁻⁻(1860) with a consecutive decay into Ξ⁻π⁻ in the photon-energy range 4.5 GeV < Eγ < 5.5 GeV.

  4. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    NASA Astrophysics Data System (ADS)

    Gilliam, David M.

    2011-06-01

    Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
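
The minimum number of tests implied by such a protocol follows from the binomial distribution. The sketch below is not the cited reference's spreadsheet formulas, but it reproduces the standard zero-failure result n = ⌈ln(1 − CL)/ln(PD)⌉ and generalises to a protocol that tolerates a few misses:

```python
import math

def min_trials(pd_required, confidence, failures_allowed=0):
    """Smallest number of trials n such that observing at most
    `failures_allowed` misses demonstrates PD >= pd_required at the given
    confidence level, i.e. P(passing the protocol | PD == pd_required)
    <= 1 - confidence."""
    n = failures_allowed + 1
    while True:
        p_pass = sum(
            math.comb(n, k) * pd_required ** (n - k) * (1 - pd_required) ** k
            for k in range(failures_allowed + 1)
        )
        if p_pass <= 1 - confidence:
            return n
        n += 1

n0 = min_trials(0.90, 0.95)      # all trials must pass
n1 = min_trials(0.90, 0.95, 1)   # one miss tolerated
```

Demonstrating PD ≥ 90% at 95% confidence needs 29 all-pass trials; tolerating one miss raises the requirement to 46 trials.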

  5. Second-hand smoking and carboxyhemoglobin levels in children: a prospective observational study.

    PubMed

    Yee, Branden E; Ahmed, Mohammed I; Brugge, Doug; Farrell, Maureen; Lozada, Gustavo; Idupaganthi, Raghu; Schumann, Roman

    2010-01-01

    To establish baseline noninvasive carboxyhemoglobin (COHb) levels in children and determine the influence of exposure to environmental sources of carbon monoxide (CO), especially environmental tobacco smoke, on such levels. Second-hand smoking may be a risk factor for adverse outcomes following anesthesia and surgery in children (1) and may potentially be preventable. Parents and their children between the ages of 1-12 were enrolled on the day of elective surgery. The preoperative COHb levels of the children were assessed noninvasively using a CO-Oximeter (Radical-7 Rainbow SET Pulse CO-Oximeter; Masimo, Irvine, CA, USA). The parents were asked to complete an environmental air-quality questionnaire. The COHb levels were tabulated and correlated with responses to the survey in aggregate analysis. Statistical analyses were performed using the nonparametric Mann-Whitney and Kruskal-Wallis tests. P < 0.05 was statistically significant. Two hundred children with their parents were enrolled. Children exposed to parental smoking had higher COHb levels than the children of nonsmoking controls. Higher COHb values were seen in the youngest children, ages 1-2, exposed to parental cigarette smoke. However, these trends did not reach statistical significance, and confidence intervals were wide. This study revealed interesting trends of COHb levels in children presenting for anesthesia and surgery. However, the COHb levels measured in our patients were close to the error margin of the device used in our study. An expected improvement in measurement technology may allow screening children for potential pulmonary perioperative risk factors in the future.

  6. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages: the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases, which must be offset by an increase in sample size to enhance confidence in the estimate. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
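
The quoted PPQ sample sizes follow from the success-run relation reliability^n ≤ 1 − confidence. A minimal sketch at the stated 95% confidence level:

```python
import math

def success_run_sample_size(reliability, confidence=0.95):
    """Success-run theorem: smallest number of consecutive passing units
    needed to claim `reliability` at `confidence`, from the requirement
    reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# reliability levels assigned to high-, medium- and low-risk factors
sizes = [success_run_sample_size(r) for r in (0.99, 0.95, 0.90)]
# reproduces the sample sizes of 299, 59 and 29 quoted in the abstract
```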

  7. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
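
As an illustration of the kind of power calculation discussed, here is a normal-approximation sketch for a two-group comparison of means; an exact noncentral-t calculation, such as those in the interactive programs the paper presents, gives slightly lower power:

```python
import math
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-group comparison of means for
    a standardized effect size (Cohen's d), using the normal approximation
    (slightly optimistic relative to the exact t-test calculation)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# 26 animals per group for a large standardized effect (d = 0.8)
power = two_sample_power(effect_size=0.8, n_per_group=26)  # ~0.82
```

Inverting the same relation (solving for n at a target power) is the usual route to sample size determination.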

  8. Risk factors for persistent gestational trophoblastic neoplasia.

    PubMed

    Kuyumcuoglu, Umur; Guzel, Ali Irfan; Erdemoglu, Mahmut; Celik, Yusuf

    2011-01-01

    This retrospective study evaluated the risk factors for persistent gestational trophoblastic neoplasia (GTN) and determined their odds ratios. This study included 100 cases with GTN admitted to our clinic. Possible risk factors recorded were age, gravidity, parity, size of the neoplasia, and beta-human chorionic gonadotropin (beta-hCG) levels before and after the procedure. Statistical analyses consisted of the independent sample t-test and logistic regression using the statistical package SPSS ver. 15.0 for Windows (SPSS, Chicago, IL, USA). Twenty of the cases had persistent GTN, and the differences between these and the other cases were evaluated. The size of the neoplasia and histopathological type of GTN had no statistical relationship with persistence, whereas age, gravidity, and beta-hCG levels were significant risk factors for persistent GTN (p < 0.05). The odds ratios (95% confidence intervals, CI) for age, gravidity, and pre- and post-evacuation beta-hCG levels determined using logistic regression were 4.678 (0.97-22.44), 7.315 (1.16-46.16), 2.637 (1.41-4.94), and 2.339 (1.52-3.60), respectively. Patient age, gravidity, and beta-hCG levels were risk factors for persistent GTN, whereas the size of the neoplasia and histopathological type of GTN were not significant risk factors.

  9. Stability in the metamemory realism of eyewitness confidence judgments.

    PubMed

    Buratti, Sandra; Allwood, Carl Martin; Johansson, Marcus

    2014-02-01

    The stability over time of eyewitnesses' confidence judgments about their reported memories, and of the accuracy of these judgments, is of interest in forensic contexts because witnesses are often interviewed many times. The present study investigated the stability of the confidence judgments of memory reports of a witnessed event and of the accuracy of these judgments over three occasions, each separated by 1 week. Three age groups were studied: younger children (8-9 years), older children (10-11 years), and adults (19-31 years). A total of 93 participants viewed a short film clip and were asked to answer directed two-alternative forced-choice questions about the film clip and to make a confidence judgment for each answer. Different questions about details in the film clip were used on each of the three test occasions. Confidence as such did not exhibit stability over time on an individual basis. However, the difference between confidence and proportion correct did exhibit stability across time, in terms of both over/underconfidence and calibration. With respect to age, the adults and older children exhibited more stability than the younger children for calibration. Furthermore, some support for instability was found with respect to the difference between the average confidence level for correct and incorrect answers (slope). Unexpectedly, however, the younger children's slope was found to be more stable than the adults'. Compared with previous research, the present study's use of more advanced statistical methods provides a more nuanced understanding of the stability of confidence judgments in the eyewitness reports of children and adults.
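
The over/underconfidence and slope measures referred to above can be computed directly from paired confidence ratings and answer correctness; the ratings below are hypothetical:

```python
from statistics import fmean

def over_underconfidence(confidences, correct):
    """Mean confidence minus proportion correct: positive values indicate
    overconfidence, negative values underconfidence."""
    return fmean(confidences) - fmean(correct)

def slope(confidences, correct):
    """Difference between mean confidence for correct and for incorrect
    answers: how well confidence discriminates right from wrong."""
    right = [c for c, ok in zip(confidences, correct) if ok]
    wrong = [c for c, ok in zip(confidences, correct) if not ok]
    return fmean(right) - fmean(wrong)

# hypothetical witness: confidence judgments (0-1) and answer correctness
conf = [0.9, 0.8, 1.0, 0.7, 0.6]
corr = [1, 0, 1, 1, 0]
```

For this witness, mean confidence (0.8) exceeds proportion correct (0.6), giving an overconfidence of 0.2; calibration proper would additionally bin the judgments by confidence level.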

  10. Stapled versus handsewn methods for colorectal anastomosis surgery.

    PubMed

    Lustosa, S A; Matos, D; Atallah, A N; Castro, A A

    2001-01-01

    Randomized controlled trials comparing stapled with handsewn colorectal anastomosis have not shown either technique to be superior, perhaps because individual studies lacked statistical power. A systematic review, with pooled analysis of results, might provide a more definitive answer. To compare the safety and effectiveness of stapled and handsewn colorectal anastomosis. The following primary hypothesis was tested: the stapled technique is more effective because it decreases the level of complications. The RCT register of the Cochrane Review Group was searched for any trial or reference to a relevant trial (published, in-press, or in progress). All publications were sought through computerised searches of EMBASE, LILACS, MEDLINE, the Cochrane Controlled Clinical Trials Database, and through letters to industrial companies and authors. There were no limits upon language, date, or other criteria. All randomized clinical trials (RCTs) in which stapled and handsewn colorectal anastomosis were compared. Adult patients submitted electively to colorectal anastomosis. Endoluminal circular stapler and handsewn colorectal anastomosis. a) Mortality b) Overall Anastomotic Dehiscence c) Clinical Anastomotic Dehiscence d) Radiological Anastomotic Dehiscence e) Stricture f) Anastomotic Haemorrhage g) Reoperation h) Wound Infection i) Anastomosis Duration j) Hospital Stay. Data were independently extracted by the two reviewers (SASL, DM) and cross-checked. The methodological quality of each trial was assessed by the same two reviewers. Details of the randomization (generation and concealment), blinding, whether an intention-to-treat analysis was done, and the number of patients lost to follow-up were recorded. The results of each RCT were summarised on an intention-to-treat basis in 2 x 2 tables for each outcome. External validity was defined by characteristics of the participants, the interventions and the outcomes. 
The RCTs were stratified according to the level of colorectal anastomosis. The Risk Difference method (random effects model) and NNT for dichotomous outcome measures and weighted mean difference for continuous outcome measures, with the corresponding 95% confidence interval, were presented in this review. Statistical heterogeneity was evaluated by using funnel plots and chi-square testing. Of the 1233 patients enrolled (in 9 trials), 622 were treated with stapled and 611 with handsewn sutures. The following main results were obtained: a) Mortality: result based on 901 patients; Risk Difference -0.6%, 95% Confidence Interval -2.8% to +1.6%. b) Overall Dehiscence: result based on 1233 patients; Risk Difference 0.2%, 95% Confidence Interval -5.0% to +5.3%. c) Clinical Anastomotic Dehiscence: result based on 1233 patients; Risk Difference -1.4%, 95% Confidence Interval -5.2% to +2.3%. d) Radiological Anastomotic Dehiscence: result based on 825 patients; Risk Difference 1.2%, 95% Confidence Interval -4.8% to +7.3%. e) Stricture: result based on 1042 patients; Risk Difference 4.6%, 95% Confidence Interval 1.2% to 8.1%. Number needed to treat 17, 95% confidence interval 12 to 31. f) Anastomotic Hemorrhage: result based on 662 patients; Risk Difference 2.7%, 95% Confidence Interval -0.1% to +5.5%. g) Reoperation: result based on 544 patients; Risk Difference 3.9%, 95% Confidence Interval 0.3% to 7.4%. h) Wound Infection: result based on 567 patients; Risk Difference 1.0%, 95% Confidence Interval -2.2% to +4.3%. i) Anastomosis duration: result based on one study (159 patients); Weighted Mean Difference -7.6 minutes, 95% Confidence Interval -12.9 to -2.2 minutes. j) Hospital Stay: result based on one study (159 patients); Weighted Mean Difference 2.0 days, 95% Confidence Interval -3.27 to +7.2 days. The evidence found was insufficient to demonstrate any superiority of stapled over handsewn techniques in colorectal anastomosis, regardless of the level of anastomosis.
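
A risk difference with its Wald confidence interval, and the NNT as the reciprocal of the absolute risk difference, can be reproduced from 2 × 2 counts. The counts below are hypothetical and chosen only for illustration; the review's actual pooled figures come from weighting across trials under a random-effects model:

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference between two trial arms with a Wald 95% confidence
    interval (z = 1.96)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, (rd - z * se, rd + z * se)

# hypothetical 2x2 counts: 40/622 strictures (stapled) vs 12/611 (handsewn)
rd, (lo, hi) = risk_difference_ci(40, 622, 12, 611)
# number needed to treat (harm, here): reciprocal of |risk difference|
nnt = math.ceil(1 / abs(rd))
```

With these hypothetical counts the interval excludes zero (a significant excess of strictures) and the NNT rounds up to 23.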

  11. Compliance with removable orthodontic appliances.

    PubMed

    Shah, Nirmal

    2017-12-22

    Data sources: Medline via OVID, PubMed, Cochrane Central Register of Controlled Trials, Web of Science Core Collection, LILACS and BBO databases. Unpublished clinical trials accessed using ClinicalTrials.gov, National Research Register, ProQuest Dissertation and Thesis database. Study selection: Two authors searched studies from inception until May 2016 without language restrictions. Quantitative and qualitative studies incorporating objective data on compliance with removable appliances, barriers to appliance wear compliance, and interventions to improve compliance were included. Data extraction and synthesis: Quality of research was assessed using the Cochrane Collaboration's risk of bias tool, the risk of bias in non-randomised studies of interventions (ROBINS-I), and the mixed methods appraisal tool. Statistical heterogeneity was investigated by examining a graphic display of the estimated compliance levels in conjunction with 95% confidence intervals and quantified using the I-squared statistic. A weighted estimate of objective compliance levels for different appliances in relation to stipulated wear and self-reported levels was also calculated. Risk of publication bias was assessed using funnel plots. Meta-regression was undertaken to assess the relative effects of appliance type on compliance levels. Results: Twenty-four studies met the inclusion criteria. Of these, 11 were included in the quantitative synthesis. The mean duration of objectively measured wear was considerably lower than stipulated wear time amongst all appliances. Headgear had the greatest discrepancy (5.81 hours, 95% confidence interval, 4.98, 6.64). Self-reported wear time was consistently higher than objectively measured wear time amongst all appliances. Headgear had the greatest discrepancy (5.02 hours, 95% confidence interval, 3.64, 6.40). Two studies found an increase in compliance with headgear and Hawley retainers when patients were aware of monitoring.
Five studies found younger age groups to be more compliant than older groups. Three studies also found compliance to be better in the early stages of treatment. Integration between quantitative and qualitative studies was not possible.ConclusionsCompliance with removable orthodontic appliances is suboptimal. Patients wear appliances for considerably less time than stipulated and self-reported. Compliance may be increased when patients are aware of monitoring; however, further research is required to identify effective interventions and possible barriers in order to improve removable orthodontic appliance compliance.
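    The I-squared statistic used above to quantify heterogeneity can be computed directly from Cochran's Q; a minimal sketch with illustrative values (not the review's data):

```python
def i_squared(q, df):
    """I^2: percentage of across-study variability attributable to
    heterogeneity rather than chance, computed from Cochran's Q
    statistic and its degrees of freedom (number of studies minus 1)."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Illustrative values only: Q twice its degrees of freedom.
print(i_squared(20.0, 10))  # 50.0 -> moderate heterogeneity
```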

  12. Vitamin D Status at Birth and Future Risk of Attention Deficit/Hyperactivity Disorder (ADHD).

    PubMed

    Gustafsson, Peik; Rylander, Lars; Lindh, Christian H; Jönsson, Bo A G; Ode, Amanda; Olofsson, Per; Ivarsson, Sten A; Rignell-Hydbom, Anna; Haglund, Nils; Källén, Karin

    2015-01-01

    To investigate whether children with Attention Deficit/Hyperactivity Disorder have lower levels of Vitamin D3 at birth than matched controls. Umbilical cord blood samples collected at birth from 202 children later diagnosed with Attention Deficit/Hyperactivity Disorder were analysed for vitamin D content and compared with 202 matched controls. 25-OH vitamin D3 was analysed by liquid chromatography tandem mass spectrometry. No differences in cord blood vitamin D concentration were found between children with Attention Deficit/Hyperactivity Disorder (median 13.0 ng/ml) and controls (median 13.5 ng/ml) (p = 0.43). In a logistic regression analysis, Attention Deficit/Hyperactivity Disorder showed a significant association with maternal age (odds ratio: 0.96, 95% confidence interval: 0.92-0.99) but not with vitamin D levels (odds ratio: 0.99, 95% confidence interval: 0.97-1.02). We found no difference in intrauterine vitamin D levels between children who later developed Attention Deficit/Hyperactivity Disorder and matched control children. However, the statistical power of the study was too weak to detect a possible small-to-medium-sized association between vitamin D levels and Attention Deficit/Hyperactivity Disorder.

  13. 40 CFR Appendix C to Part 60 - Determination of Emission Rate Change

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission rate to the atmosphere. The method used is the Student's t test, commonly used to make inferences.... 3.5 Calculate the test statistic, t, using Equation 4. 4. Results 4.1 If Eb > Ea... occurred. Table 1: Degrees of freedom (na = nb − 2) and t′ (95 percent confidence level): 2, 2.920; 3, 2.353; 4, 2.132; 5, ...
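    The comparison this appendix outlines reduces to a pooled two-sample t statistic checked against the Table 1 critical values; the emission-rate runs below are illustrative, not from the regulation:

```python
import math

# One-sided t' critical values at the 95 percent confidence level,
# keyed by degrees of freedom (n_a + n_b - 2), from Table 1.
T_CRIT_95 = {2: 2.920, 3: 2.353, 4: 2.132}

def t_statistic(before, after):
    """Pooled two-sample t statistic comparing mean emission rates
    before (E_b) and after (E_a) a physical change to the facility."""
    na, nb = len(after), len(before)
    ma = sum(after) / na
    mb = sum(before) / nb
    ssa = sum((x - ma) ** 2 for x in after)
    ssb = sum((x - mb) ** 2 for x in before)
    sp = math.sqrt((ssa + ssb) / (na + nb - 2))  # pooled std deviation
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

# Illustrative emission-rate runs (arbitrary units):
before = [100.0, 102.0, 98.0]
after = [110.0, 112.0, 108.0]
t = t_statistic(before, after)
df = len(before) + len(after) - 2
print(t > T_CRIT_95[df])  # True: significant increase at 95% confidence
```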

  14. Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.

    PubMed

    Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco

    2016-12-07

    Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecast of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets.
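    The binning-and-censoring step described above can be sketched as follows; the bin width, the MAD-based outlier rule, and the data are illustrative assumptions, not the paper's exact robust procedure:

```python
from statistics import median

def bin_and_censor(speeds, powers, bin_width=1.0, k=3.0):
    """Group power observations into wind-speed bins, then censor
    outliers in each bin with a robust median +/- k*MAD rule."""
    bins = {}
    for s, p in zip(speeds, powers):
        bins.setdefault(int(s // bin_width), []).append(p)
    censored = {}
    for b, vals in bins.items():
        med = median(vals)
        mad = median(abs(v - med) for v in vals) or 1e-12
        censored[b] = [v for v in vals if abs(v - med) <= k * mad]
    return censored

# Illustrative observations in one 1 m/s bin; the 0.0 kW reading
# (e.g. a curtailment event) is the outlier to be censored.
speeds = [5.1, 5.4, 5.7, 5.2, 5.9, 5.5]
powers = [200.0, 205.0, 198.0, 202.0, 0.0, 201.0]
clean = bin_and_censor(speeds, powers)
print(clean[5])  # 0.0 removed; the five plausible readings remain
```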

  15. Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine

    PubMed Central

    Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L.; Balleteros, Francisco

    2016-01-01

    Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecast of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets. PMID:27941604

  16. Prediction model for peninsular Indian summer monsoon rainfall using data mining and statistical approaches

    NASA Astrophysics Data System (ADS)

    Vathsala, H.; Koolagudi, Shashidhar G.

    2017-01-01

    In this paper we discuss a data mining application for predicting peninsular Indian summer monsoon rainfall and propose an algorithm that combines data mining and statistical techniques. We select likely predictors based on association rules that have the highest confidence levels. We then cluster the selected predictors to reduce their dimensions and use cluster membership values for classification. We derive the predictors from local conditions in southern India, including mean sea level pressure, wind speed, and maximum and minimum temperatures. The global condition variables include southern oscillation and Indian Ocean dipole conditions. The algorithm predicts rainfall in five categories: Flood, Excess, Normal, Deficit and Drought. We use closed itemset mining, cluster membership calculations and a multilayer perceptron function in the algorithm to predict monsoon rainfall in peninsular India. Using Indian Institute of Tropical Meteorology data, we found the prediction accuracy of our proposed approach to be exceptionally good.
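    Rule confidence, the selection criterion named above, is simply a conditional relative frequency over the transaction database; a minimal sketch with toy transactions (the condition labels are placeholders, not the paper's actual predictor variables):

```python
def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent and consequent) / support(antecedent)."""
    a = set(antecedent)
    ab = a | set(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_ab = sum(1 for t in transactions if ab <= t)
    return n_ab / n_a if n_a else 0.0

# Toy discretized records: each set holds condition labels.
transactions = [
    {"low_mslp", "high_wind", "flood"},
    {"low_mslp", "high_wind", "excess"},
    {"low_mslp", "flood"},
    {"high_temp", "deficit"},
]
conf_rule = confidence(transactions, {"low_mslp"}, {"flood"})
print(conf_rule)  # 2/3: "flood" follows "low_mslp" in 2 of 3 matching records
```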

  17. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and using semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
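    The Gaussian-process machinery that supplies those predictive uncertainties can be sketched in a few lines; the single-fidelity RBF kernel, its hyperparameters, and the toy data below are illustrative assumptions, not the paper's co-kriging model:

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=1e-6):
    """Plain GP regression with a unit-variance RBF kernel: returns the
    predictive mean and variance at test points Xs given training (X, y)."""
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    alpha = np.linalg.solve(K, y)
    mu = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1) + noise
    return mu, var

# Toy 1D "bandgap vs. descriptor" data (illustrative only):
X = np.array([0.0, 1.0, 2.0])
y = np.array([1.2, 2.3, 1.8])
mu, var = gp_predict(X, y, np.array([1.0, 5.0]))
# Near a training point the variance is tiny; far away it grows toward
# the prior variance, quantifying confidence in each prediction.
```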

  18. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and using semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.

  19. Optimization and validation of a method for the determination of the refractive index of milk serum based on the reaction between milk and copper(II) sulfate to detect milk dilutions.

    PubMed

    Rezende, Patrícia Sueli; Carmo, Geraldo Paulo do; Esteves, Eduardo Gonçalves

    2015-06-01

    We report the use of a method to determine the refractive index of copper(II) serum (RICS) in milk as a tool to detect the fraudulent addition of water. This practice is highly profitable, unlawful, and difficult to deter. The method was optimized and validated and is simple, fast and robust. The optimized method yielded statistically equivalent results compared to the reference method with an accuracy of 0.4% and quadrupled analytical throughput. Trueness, precision (repeatability and intermediate precision) and ruggedness are determined to be satisfactory at a 95.45% confidence level. The expanded uncertainty of the measurement was ±0.38°Zeiss at the 95.45% confidence level (k=3.30), corresponding to 1.03% of the minimum measurement expected in adequate samples (>37.00°Zeiss). Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Analysis of ground water by different laboratories: a comparison of chloride and nitrate data, Nassau and Suffolk counties, New York

    USGS Publications Warehouse

    Katz, Brian G.; Krulikas, Richard K.

    1979-01-01

    Water samples from wells in Nassau and Suffolk Counties were analyzed for chloride and nitrate. Two samples were collected at each well; one was analyzed by the U.S. Geological Survey, the other by a laboratory in the county from which the sample was taken. Results were compared statistically by paired-sample t-test to indicate the degree of uniformity among laboratory results. Chloride analyses from one of the three county laboratories differed significantly (0.95 confidence level) from that of a Geological Survey laboratory. For nitrate analyses, a significant difference (0.95 confidence level) was noted between results from two of the three county laboratories and the Geological Survey laboratory. The lack of uniformity among results reported by the participating laboratories indicates a need for continuing participation in a quality-assurance program and exercise of strong quality control from time of sample collection through analysis so that differences can be evaluated. (Kosco-USGS)
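    The paired-sample t-test used for the laboratory comparison reduces to a one-sample test on the per-well differences; a minimal sketch with illustrative concentrations (not the study's data):

```python
import math

def paired_t(x1, x2):
    """Paired-sample t statistic: mean of per-pair differences over its
    standard error; compare |t| with the critical t for n-1 degrees of
    freedom at the chosen confidence level."""
    d = [a - b for a, b in zip(x1, x2)]
    n = len(d)
    dbar = sum(d) / n
    sd = math.sqrt(sum((di - dbar) ** 2 for di in d) / (n - 1))
    return dbar / (sd / math.sqrt(n))

# Illustrative chloride results (mg/L): same wells, two laboratories.
usgs = [12.0, 18.5, 25.0, 30.5]
county = [11.0, 16.5, 22.0, 26.5]
t = paired_t(usgs, county)
print(round(t, 3))  # 3.873; compare against t_crit(0.95, df=3)
```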

  1. Adverse Outcome Pathways for Regulatory Applications: Examination of Four Case Studies With Different Degrees of Completeness and Scientific Confidence.

    PubMed

    Perkins, Edward J; Antczak, Philipp; Burgoon, Lyle; Falciani, Francesco; Garcia-Reyero, Natàlia; Gutsell, Steve; Hodges, Geoff; Kienzler, Aude; Knapen, Dries; McBride, Mary; Willett, Catherine

    2015-11-01

    Adverse outcome pathways (AOPs) offer a pathway-based toxicological framework to support hazard assessment and regulatory decision-making. However, little has been discussed about the scientific confidence needed, or how complete a pathway should be, before use in a specific regulatory application. Here we review four case studies to explore the degree of scientific confidence and extent of completeness (in terms of causal events) that is required for an AOP to be useful for a specific purpose in a regulatory application: (i) Membrane disruption (Narcosis) leading to respiratory failure (low confidence), (ii) Hepatocellular proliferation leading to cancer (partial pathway, moderate confidence), (iii) Covalent binding to proteins leading to skin sensitization (high confidence), and (iv) Aromatase inhibition leading to reproductive dysfunction in fish (high confidence). Partially complete AOPs with unknown molecular initiating events, such as 'Hepatocellular proliferation leading to cancer', were found to be valuable. We demonstrate that scientific confidence in these pathways can be increased through the use of unconventional information (e.g., computational identification of potential initiators). AOPs at all levels of confidence can contribute to specific uses. A significant statistical or quantitative relationship between events and/or the adverse outcome relationships is a common characteristic of AOPs, both incomplete and complete, that have specific regulatory uses. For AOPs to be useful in a regulatory context, they must be at least as useful as the tools or techniques that regulators currently employ. Published by Oxford University Press on behalf of the Society of Toxicology 2015. This work is written by US Government employees and is in the public domain in the US.

  2. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
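    The kind of frequentist interval the report proposes can be illustrated with a Wilson score interval on a Monte Carlo failure count; the Wilson construction is one standard choice among several (the report's exact method may differ), and the counts are illustrative:

```python
import math

def wilson_interval(failures, n, z=1.96):
    """Wilson score 95% confidence interval for a failure probability
    estimated from n load-vs-capacity simulations."""
    p = failures / n
    z2 = z * z
    center = (p + z2 / (2 * n)) / (1 + z2 / n)
    half = z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / (1 + z2 / n)
    return center - half, center + half

# 5 failures in 1,000 simulations vs. 50 in 10,000: same point
# estimate, but the larger run pins the probability down tighter.
lo1, hi1 = wilson_interval(5, 1_000)
lo2, hi2 = wilson_interval(50, 10_000)
print(hi1 - lo1 > hi2 - lo2)  # True: more simulations, narrower interval
```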

  3. Serum albumin levels in burn people are associated to the total body surface burned and the length of hospital stay but not to the initiation of the oral/enteral nutrition

    PubMed Central

    Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; DeRosier, Leo C; de la Torre, Jorge I

    2013-01-01

    Objective: Serum albumin levels have been used to evaluate the severity of the burns and the nutrition protein status in burn people, specifically in the response of the burn patient to nutrition, although it has not been proven that all of these associations are fully founded. The aim of this retrospective study was to determine the relationship of serum albumin levels at 3-7 days after the burn injury with the total body surface area burned (TBSA), the length of hospital stay (LHS) and the initiation of the oral/enteral nutrition (IOEN). Subject and methods: It was carried out with the health records of patients that met the inclusion criteria and were admitted to the burn units at the University Hospital of Reina Sofia (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA) over a 10-year period, between January 2000 and December 2009. We studied the statistical association of serum albumin levels with the TBSA, LHS and IOEN by one-way ANOVA. The confidence interval chosen for statistical differences was 95%. Duncan's test was used to determine the number of statistically significantly different groups. Results: Results were expressed as mean±standard deviation. We found serum albumin levels to be associated with TBSA and LHS, with greater to lesser serum albumin levels associated with lesser to greater TBSA and LHS. We did not find a statistical association with IOEN. Conclusion: We conclude that serum albumin levels are not a nutritional marker in burn people, although they could be used as a simple clinical tool to identify the severity of the burn wounds represented by the total body surface area burned and the length of hospital stay. PMID:23875122

  4. Serum albumin levels in burn people are associated to the total body surface burned and the length of hospital stay but not to the initiation of the oral/enteral nutrition.

    PubMed

    Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; Derosier, Leo C; de la Torre, Jorge I

    2013-01-01

    Serum albumin levels have been used to evaluate the severity of the burns and the nutrition protein status in burn people, specifically in the response of the burn patient to nutrition, although it has not been proven that all of these associations are fully founded. The aim of this retrospective study was to determine the relationship of serum albumin levels at 3-7 days after the burn injury with the total body surface area burned (TBSA), the length of hospital stay (LHS) and the initiation of the oral/enteral nutrition (IOEN). It was carried out with the health records of patients that met the inclusion criteria and were admitted to the burn units at the University Hospital of Reina Sofia (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA) over a 10-year period, between January 2000 and December 2009. We studied the statistical association of serum albumin levels with the TBSA, LHS and IOEN by one-way ANOVA. The confidence interval chosen for statistical differences was 95%. Duncan's test was used to determine the number of statistically significantly different groups. Results were expressed as mean±standard deviation. We found serum albumin levels to be associated with TBSA and LHS, with greater to lesser serum albumin levels associated with lesser to greater TBSA and LHS. We did not find a statistical association with IOEN. We conclude that serum albumin levels are not a nutritional marker in burn people, although they could be used as a simple clinical tool to identify the severity of the burn wounds represented by the total body surface area burned and the length of hospital stay.

  5. Generalized SAMPLE SIZE Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, so as to simultaneously cover the various kinds of contextual effects in which researchers may show interest. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption regarding the known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  6. Statistical analysis of the calibration procedure for personnel radiation measurement instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, W.J.; Bengston, S.J.; Kalbeitzer, F.L.

    1980-11-01

    Thermoluminescent analyzer (TLA) calibration procedures were used to estimate personnel radiation exposure levels at the Idaho National Engineering Laboratory (INEL). A statistical analysis is presented herein based on data collected over a six month period in 1979 on four TLA's located in the Department of Energy (DOE) Radiological and Environmental Sciences Laboratory at the INEL. The data were collected according to the day-to-day procedure in effect at that time. Both gamma and beta radiation models are developed. Observed TLA readings of thermoluminescent dosimeters are correlated with known radiation levels. This correlation is then used to predict unknown radiation doses from future analyzer readings of personnel thermoluminescent dosimeters. The statistical techniques applied in this analysis include weighted linear regression, estimation of systematic and random error variances, prediction interval estimation using Scheffe's theory of calibration, the estimation of the ratio of the means of two normal bivariate distributed random variables and their corresponding confidence limits according to Kendall and Stuart, tests of normality, experimental design, a comparison between instruments, and quality control.
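    The calibration core, weighted linear regression of known dose on analyzer reading followed by prediction for new readings, can be sketched as follows; the readings, doses, and weights are illustrative, not the INEL calibration values:

```python
def weighted_fit(x, y, w):
    """Weighted least-squares line y = a + b*x; weights are typically
    inverse variances of repeated readings at each level."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return ybar - b * xbar, b

# Illustrative calibration: TLD readings vs. known doses (mrem),
# with heteroscedastic weights; here dose = 2*reading + 1 exactly.
readings = [10.0, 20.0, 40.0, 80.0]
doses = [21.0, 41.0, 81.0, 161.0]
weights = [4.0, 2.0, 1.0, 0.5]
a, b = weighted_fit(readings, doses, weights)

def dose_of(r):
    """Predict an unknown dose from a new analyzer reading."""
    return a + b * r

print(round(dose_of(30.0), 6))  # 61.0
```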

  7. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    PubMed

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the higher the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of the P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs.
Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being measured. A wide confidence interval indicates that if the experiment were repeated multiple times on other samples, the measured statistic would lie within a wide range of possibilities. The confidence interval relies on the SE. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
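    The Cohen κ-statistic described above corrects raw percent agreement for chance; a minimal two-reader sketch with illustrative scan interpretations:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Illustrative interpretations (1 = abnormal, 0 = normal):
reader_a = [1, 1, 0, 0]
reader_b = [1, 0, 0, 0]
print(cohen_kappa(reader_a, reader_b))  # 0.5: 75% raw agreement, but
# half of that agreement is expected by chance alone
```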

  8. Reassessment of urbanization effect on surface air temperature trends at an urban station of North China

    NASA Astrophysics Data System (ADS)

    Bian, Tao; Ren, Guoyu

    2017-11-01

    Based on a homogenized data set of monthly mean temperature, minimum temperature, and maximum temperature at Shijiazhuang City Meteorological Station (Shijiazhuang station) and four rural meteorological stations selected applying a more sophisticated methodology, we reanalyzed the urbanization effects on annual, seasonal, and monthly mean surface air temperature (SAT) trends for updated time period 1960-2012 at the typical urban station in North China. The results showed that (1) urbanization effects on the long-term trends of annual mean SAT, minimum SAT, and diurnal temperature range (DTR) in the last 53 years reached 0.25, 0.47, and - 0.50 °C/decade, respectively, all statistically significant at the 0.001 confidence level, with the contributions from urbanization effects to the overall long-term trends reaching 67.8, 78.6, and 100%, respectively; (2) the urbanization effects on the trends of seasonal mean SAT, minimum SAT, and DTR were also large and statistically highly significant. Except for November and December, the urbanization effects on monthly mean SAT, minimum SAT, and DTR were also all statistically significant at the 0.05 confidence level; and (3) the annual, seasonal, and monthly mean maximum SAT series at the urban station registered a generally weaker and non-significant urbanization effect. The updated analysis evidenced that our previous work for this same urban station had underestimated the urbanization effect and its contribution to the overall changes in the SAT series. Many similar urban stations were being included in the current national and regional SAT data sets, and the results of this paper further indicated the importance and urgency for paying more attention to the urbanization bias in the monitoring and detection of global and regional SAT change based on the data sets.

  9. Autonomous motivation mediates the relation between goals for physical activity and physical activity behavior in adolescents.

    PubMed

    Duncan, Michael J; Eyre, Emma Lj; Bryant, Elizabeth; Seghers, Jan; Galbraith, Niall; Nevill, Alan M

    2017-04-01

    Overall, 544 children (mean age ± standard deviation = 14.2 ± .94 years) completed self-report measures of physical activity goal content, behavioral regulations, and physical activity behavior. Body mass index was determined from height and mass. The indirect effect of intrinsic goal content on physical activity was statistically significant via autonomous (b = 162.27; 95% confidence interval [89.73, 244.70]), but not controlled motivation (b = 5.30; 95% confidence interval [-39.05, 45.16]). The indirect effect of extrinsic goal content on physical activity was statistically significant via autonomous (b = 106.25; 95% confidence interval [63.74, 159.13]) but not controlled motivation (b = 17.28; 95% confidence interval [-31.76, 70.21]). Weight status did not alter these findings.
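    Indirect effects with confidence intervals like those above are commonly estimated by a percentile bootstrap of the product of the two mediation paths (predictor-to-mediator slope a, and mediator-to-outcome slope b adjusting for the predictor). A generic sketch on synthetic data; the variable names and data are illustrative, not the study's method or measures:

```python
import random
from statistics import fmean

def slope_simple(x, y):
    """OLS slope of y on x (path a: predictor -> mediator)."""
    xb, yb = fmean(x), fmean(y)
    return (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
            / sum((xi - xb) ** 2 for xi in x))

def slope_partial(y, m, x):
    """Slope of y on m adjusting for x (path b), via the 2x2 normal equations."""
    yb, mb, xb = fmean(y), fmean(m), fmean(x)
    smm = sum((mi - mb) ** 2 for mi in m)
    sxx = sum((xi - xb) ** 2 for xi in x)
    smx = sum((mi - mb) * (xi - xb) for mi, xi in zip(m, x))
    smy = sum((mi - mb) * (yi - yb) for mi, yi in zip(m, y))
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    return (smy * sxx - sxy * smx) / (smm * sxx - smx * smx)

def indirect_ci(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a*b."""
    rng = random.Random(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        s = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in s], [m[i] for i in s], [y[i] for i in s]
        boots.append(slope_simple(xs, ms) * slope_partial(ys, ms, xs))
    boots.sort()
    return boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]

# Synthetic data with a true indirect effect of 0.5 * 0.8 = 0.4:
gen = random.Random(0)
goal = [gen.uniform(1.0, 5.0) for _ in range(200)]
motivation = [0.5 * g + gen.gauss(0, 0.3) for g in goal]
activity = [0.8 * mv + gen.gauss(0, 0.3) for mv in motivation]
lo, hi = indirect_ci(goal, motivation, activity)
print(lo > 0)  # True: the 95% CI excludes zero, as in the abstract
```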

  10. [Effects of Self-directed Feedback Practice using Smartphone Videos on Basic Nursing Skills, Confidence in Performance and Learning Satisfaction].

    PubMed

    Lee, Seul Gi; Shin, Yun Hee

    2016-04-01

    This study was done to verify the effects of self-directed feedback practice using smartphone videos on nursing students' basic nursing skills, confidence in performance, and learning satisfaction. An experimental study with a post-test only control group design was used. Twenty-nine students were assigned to the experimental group and 29 to the control group. The experimental treatment consisted of exchanging feedback on deficiencies through smartphone-recorded videos of the nursing practice process taken by peers during self-directed practice. Basic nursing skills scores were higher for all items in the experimental group compared to the control group, and differences were statistically significant ["Measuring vital signs" (t=-2.10, p=.039); "Wearing protective equipment when entering and exiting the quarantine room and the management of waste materials" (t=-4.74, p<.001); "Gavage tube feeding" (t=-2.70, p=.009)]. Confidence in performance was higher in the experimental group compared to the control group, but the differences were not statistically significant. However, after the complete practice, there was a statistically significant difference in overall performance confidence (t=-3.07, p=.003). Learning satisfaction was higher in the experimental group compared to the control group, but the difference was not statistically significant (t=-1.67, p=.100). Results of this study indicate that self-directed feedback practice using smartphone videos can improve basic nursing skills. The significance is that it can help nursing students gain confidence in their nursing skills for the future through improvement of basic nursing skills and performance of quality care, thus providing patients with safer care.

  11. Associations between age-related nuclear cataract and lutein and zeaxanthin in the diet and serum in the Carotenoids in the Age-Related Eye Disease Study, an Ancillary Study of the Women's Health Initiative.

    PubMed

    Moeller, Suzen M; Voland, Rick; Tinker, Lesley; Blodi, Barbara A; Klein, Michael L; Gehrs, Karen M; Johnson, Elizabeth J; Snodderly, D Max; Wallace, Robert B; Chappell, Richard J; Parekh, Niyati; Ritenbaugh, Cheryl; Mares, Julie A

    2008-03-01

    To evaluate associations between nuclear cataract (determined from slitlamp photographs between May 2001 and January 2004) and lutein and zeaxanthin in the diet (assessed between 1994 and 1998) and in the serum and macula (assessed between 2001 and 2004). A total of 1802 women aged 50 to 79 years in Iowa, Wisconsin, and Oregon with intakes of lutein and zeaxanthin above the 78th (high) and below the 28th (low) percentiles in the Women's Health Initiative Observational Study (1994-1998) were recruited 4 to 7 years later (2001-2004) into the Carotenoids in Age-Related Eye Disease Study. Women in the group with high dietary levels of lutein and zeaxanthin had a 23% lower prevalence of nuclear cataract (age-adjusted odds ratio, 0.77; 95% confidence interval, 0.62-0.96) compared with those with low levels. Multivariable adjustment slightly attenuated the association (odds ratio, 0.81; 95% confidence interval, 0.65-1.01). Women in the highest quintile category of diet or serum levels of lutein and zeaxanthin as compared with those in the lowest quintile category were 32% less likely to have nuclear cataract (multivariable-adjusted odds ratio, 0.68; 95% confidence interval, 0.48-0.97; P for trend = .04; and multivariable-adjusted odds ratio, 0.68; 95% confidence interval, 0.47-0.98; P for trend = .01, respectively). Cross-sectional associations with macular pigment density were inverse but not statistically significant. Diets rich in lutein and zeaxanthin are moderately associated with decreased prevalence of nuclear cataract in older women. However, other protective aspects of such diets may in part explain these relationships.
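    An odds ratio and its 95% confidence interval, the quantities reported throughout this abstract, can be computed from a 2x2 table with a Wald interval on the log scale; the counts below are illustrative, not the study's data (the study's estimates also came from adjusted models, not a raw table):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a = exposed cases, b = exposed
    controls, c = unexposed cases, d = unexposed controls) with a
    Wald 95% confidence interval computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Illustrative counts: cataract status by high/low carotenoid intake.
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
# OR = 4.0 with a wide CI of roughly (1.2, 13.3): significant, since
# the interval excludes 1, but imprecise at these small counts.
print(round(or_, 2), round(lo, 2), round(hi, 2))
```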

  12. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    PubMed Central

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
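    The SNR-CI idea can be sketched generically: resample trials with replacement, compute an SNR for each resampled average, and take the lower percentile as SNR_LB. The SNR definition (signal-window RMS over pre-stimulus noise-window RMS), the windows, and the criterion below are illustrative simplifications, not the paper's exact method:

```python
import random
from statistics import fmean

def snr(trials, signal_win, noise_win):
    """SNR of the trial-averaged waveform: RMS amplitude in the
    signal window over RMS amplitude in a pre-stimulus noise window."""
    n_samp = len(trials[0])
    avg = [fmean(t[i] for t in trials) for i in range(n_samp)]
    def rms(seg):
        return fmean(v * v for v in seg) ** 0.5
    return (rms(avg[signal_win[0]:signal_win[1]])
            / rms(avg[noise_win[0]:noise_win[1]]))

def snr_lower_bound(trials, signal_win, noise_win,
                    n_boot=1000, alpha=0.05, seed=7):
    """SNR_LB: lower bound of the bootstrap SNR confidence interval."""
    rng = random.Random(seed)
    boots = sorted(
        snr([rng.choice(trials) for _ in trials], signal_win, noise_win)
        for _ in range(n_boot))
    return boots[int(alpha * n_boot)]

# Illustrative "ERP" trials: 20 samples each, noise in the first half,
# a clear evoked bump in the second half.
gen = random.Random(0)
trials = [[gen.gauss(0, 1) for _ in range(10)] +
          [5 + gen.gauss(0, 1) for _ in range(10)] for _ in range(40)]
lb = snr_lower_bound(trials, signal_win=(10, 20), noise_win=(0, 10))
# A subject is retained only if SNR_LB exceeds a chosen criterion:
print(lb > 2.0)  # True for this clearly evoked response
```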

  13. Effects of drilling fluids on soils and plants: I. Individual fluid components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, R.W.; Honarvar, S.; Hunsaker, B.

    1980-01-01

    The effects of 31 drilling fluid (drilling mud) components on the growth of green beans (Phaseolus vulgaris L., Tendergreen) and sweet corn (Zea mays var. saccharata (Sturtev.) Bailey, Northrup King 199) were evaluated in greenhouse studies. Plants grew well in fertile Dagor silt loam soil (Cumulic Haploxeroll) when the soil was mixed with most soil-component mixtures at disposal proportions normally expected. Vinyl acetate and maleic acid polymer (VAMA) addition caused significantly increased growth at the 95% confidence level. No statistically significant depression of plant growth occurred at normal rates with asbestos, asphalt, barite, bentonite, calcium lignosulfonate, sodium polyacrylate, a modified tannin, ethoxylated nonylphenol, a filming amine, gilsonite, a Xanthan gum, paraformaldehyde, a pipe dope, hydrolyzed polyacrylamide, sodium acid pyrophosphate, sodium carboxymethyl cellulose, sodium hydroxide added as pellets, and a sulfonated tall oil. Statistically significant reductions in plant yields (at the 95% confidence level) occurred at normal disposal rates with a long-chained aliphatic alcohol, sodium dichromate, diesel oil, guar gum, an iron chrome lignosulfonate, lignite, a modified asphalt, a plant fiber-synthetic fiber mixture, a nonfermenting starch, potassium chloride, pregelatinized starch, and sulfated triglyceride. Thirteen drilling fluid components added individually to a fluid base (water, bentonite, and barite) and then to soil were also tested for their effect on plant growth. Only the sulfated triglyceride (Torq-Trim) and the long-chain (high molecular weight) alcohol (Drillaid 405) caused no plant growth reductions at either rate added. The modified tannin (Desco) caused minimal reduction in bean growth only when added to soil in excess levels.

  14. In silico model-based inference: a contemporary approach for hypothesis testing in network biology

    PubMed Central

    Klinke, David J.

    2014-01-01

    Inductive inference plays a central role in the study of biological systems, where one aims to increase understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among its components. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach, integrating ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics, to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims. PMID:25139179

  15. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    PubMed

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems, where one aims to increase understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among its components. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach, integrating ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics, to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims. © 2014 American Institute of Chemical Engineers.

  16. Evaluating the efficiency of environmental monitoring programs

    USGS Publications Warehouse

    Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina

    2014-01-01

    Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
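    The bootstrapping analysis described for the forest-inventory example can be sketched as follows; the plot values and candidate sample sizes are hypothetical:

```python
import random
import statistics

def bootstrap_ci_halfwidth(sample, n_boot=2000, seed=0):
    """Half-width of the 95% percentile bootstrap CI for the sample mean."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(sample, k=len(sample))) for _ in range(n_boot)
    )
    return (means[int(0.975 * n_boot)] - means[int(0.025 * n_boot)]) / 2

# Hypothetical plot-level biomass values; how does CI width shrink with more plots?
rng = random.Random(42)
plots = [rng.gauss(100, 20) for _ in range(400)]
halfwidths = {n: bootstrap_ci_halfwidth(rng.sample(plots, n)) for n in (25, 100, 400)}
# Pick the smallest sampling intensity whose CI half-width meets the desired target
```

    Comparing half-widths across candidate intensities is one way to quantify the sampling regime needed to achieve a desired confidence interval.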

  17. Abdominal pain endpoints currently recommended by the FDA and EMA for adult patients with irritable bowel syndrome may not be reliable in children.

    PubMed

    Saps, M; Lavigne, J V

    2015-06-01

    The Food and Drug Administration (FDA) recommended ≥30% decrease on patient-reported outcomes for pain be considered clinically significant in clinical trials for adults with irritable bowel syndrome. This percent change approach may not be appropriate for children. We compared three alternate approaches to determining clinically significant reductions in pain among children. 80 children with functional abdominal pain participated in a study of the efficacy of amitriptyline. Endpoints included patient-reported estimates of feeling better, and pain Visual Analog Scale (VAS). The minimum clinically important difference in pain report was calculated as (i) mean change in VAS score for children reporting being 'better'; (ii) percent changes in pain (≥30% and ≥50%) on the VAS; and (iii) statistically reliable changes on the VAS for 68% and 95% confidence intervals. There was poor agreement between the three approaches. 43.6% of the children who met the FDA ≥30% criterion for clinically significant change did not achieve a reliable level of improvement (95% confidence interval). Children's self-reported ratings of being better may not be statistically reliable. A combined approach in which children must report improvement as better and achieve a statistically significant change may be more appropriate for outcomes in clinical trials. © 2015 John Wiley & Sons Ltd.
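    The gap between the FDA-style percent-change rule and a statistically reliable change can be sketched as follows (the VAS scores, scale SD, and test-retest reliability below are assumed values, not the study's data):

```python
import math

def clinically_improved(pre, post, sd_baseline, reliability, z=1.96):
    """Compare a >=30% pain-reduction rule with a 95% reliable-change rule.
    The reliable-change test divides the raw change by the standard error
    of a difference score: SD * sqrt(2 * (1 - test-retest reliability))."""
    pct_rule = (pre - post) / pre >= 0.30
    se_diff = sd_baseline * math.sqrt(2 * (1 - reliability))
    reliable = (pre - post) / se_diff > z
    return pct_rule, reliable

# Hypothetical child: VAS drops 40 -> 26 (a 35% reduction)
pct, rel = clinically_improved(pre=40, post=26, sd_baseline=20, reliability=0.75)
```

    Here the percent-change rule counts the child as improved while the reliable-change rule does not, mirroring the disagreement between approaches reported above.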

  18. Crash Lethality Model

    DTIC Science & Technology

    2012-06-06

    Statistical Data ... Parametric Model for Rotor Wing Debris Area ... Skid Distance Statistical Data ... results. The curve that related the BC value to the probability of skull fracture resulted in a tight confidence interval and a two-tailed statistical p ...

  19. Does dental undergraduate education and postgraduate training enable intention to provide inhalation sedation in primary dental care? A path analytical exploration.

    PubMed

    Yuan, S; J Carson, S; Rooksby, M; McKerrow, J; Lush, C; Humphris, G; Freeman, R

    2017-08-01

    To examine how quality standards of dental undergraduate education, postgraduate training and qualifications together with confidence and barriers could be utilised to predict intention to provide inhalation sedation. All 202 dentists working within primary dental care in NHS Highland were invited to participate. The measures in the questionnaire survey included demographic information, undergraduate education and postgraduate qualifications, current provision and access to sedation services, and attitudes covering confidence, barriers, and intention to provide inhalation sedation. A path analytical approach was employed to investigate the fit of the collected data to the proposed mediational model. One hundred and nine dentists completed the entire questionnaire (response rate 54%). Seventy-six per cent of dentists reported receiving lectures in conscious sedation during their undergraduate education. Statistically significantly more Public Dental Service (PDS) dentists than General Dental Service (GDS) dentists had postgraduate qualifications and Continuing Professional Development training experience in conscious sedation. Only twenty-four per cent of the participants stated that they provided inhalation sedation to their patients. The findings indicated that PDS dentists had higher attitudinal scores towards inhalation sedation than GDS practitioners. The proposed model showed an excellent level of fit. A multigroup comparison test confirmed that the level of association between confidence in providing inhalation sedation and intention varied by group (GDS vs. PDS respondents). PDS respondents with extensive postgraduate training experience in inhalation sedation were more confident and more likely to provide this service. The quality standards of dental undergraduate education, postgraduate qualifications and training together with improved confidence predicted primary care dentists' intention to provide inhalation sedation. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Prehospital care training in a rapidly developing economy: a multi-institutional study.

    PubMed

    Vyas, Dinesh; Hollis, Michael; Abraham, Rohit; Rustagi, Neeti; Chandra, Siddharth; Malhotra, Ajai; Rajpurohit, Vikas; Purohit, Harshada; Pal, Ranabir

    2016-06-01

    The trauma pandemic is one of the leading causes of death worldwide, especially in rapidly developing economies. Perhaps a common cause of trauma-related mortality in these settings is the rapid expansion of motor vehicle ownership without a corresponding expansion of the national prehospital training found in developed countries. Victims of the resulting road traffic injuries often never make it to the hospital in time for effective treatment, resulting in preventable disability and death. The current article examines the development of a medical first responder training program that has the potential to reduce this unnecessary morbidity and mortality. An intensive training workshop has been differentiated into two progressive tiers: acute trauma training (ATT) and broad trauma training (BTT) protocols. These four-hour and two-day protocols, respectively, allow for the mass education of laypersons, such as police officials, fire brigade, and taxi and/or ambulance drivers, who are most likely to interact first with prehospital victims. Over 750 ATT participants and 168 BTT participants were trained across three Indian educational institutions at Jodhpur and Jaipur. Trainees were given didactic and hands-on education in a series of critical trauma topics, in addition to pretraining and post-training self-assessments to rate clinical confidence across curricular topics. Two-sample t-test statistical analyses were performed to compare pretraining and post-training confidence levels. Program development resulted in recruitment of a variety of career backgrounds for enrollment in both our ATT and BTT workshops. The workshops were run by local physicians from a wide spectrum of medical specialties and previously ATT-trained police officials. Statistically significant improvements in clinical confidence across all curricular topics for ATT and BTT protocols were identified (P < 0.0001). In addition, improvement in confidence after BTT training was similar in Jodhpur and Jaipur. These results suggest a promising level of reliability and reproducibility across different geographic areas in rapidly developing settings. Program expansion can offer an exponential growth in the training rate of medical first responders, which can help curb the trauma-related mortality in rapidly developing economies. Future directions will include clinical competency assessments and further progressive differentiation into higher tiers of trauma expertise. Published by Elsevier Inc.

  1. Statistical analysis of NaOH pretreatment effects on sweet sorghum bagasse characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Ary Mauliva Hada; Wahyuni, Eka Tri; Sudiyani, Yanni

    2017-01-01

    We analyze the characteristics of sweet sorghum bagasse before and after NaOH pretreatment by statistical analysis. These characteristics include the percentages of lignocellulosic materials and the degree of crystallinity. We use the chi-square method to obtain the values of fitted parameters, and then apply Student's t-test to check whether they are significantly different from zero at the 99.73% confidence level (C.L.). We find that, in the cases of hemicellulose and lignin, the percentages decrease significantly after pretreatment. Crystallinity, on the other hand, does not show similar behavior: the data indicate that all fitted parameters in this case are consistent with zero. Our statistical result is then cross-examined with observations from X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy, showing good agreement. This result may indicate that the 10% NaOH pretreatment is not sufficient to change the crystallinity index of the sweet sorghum bagasse.
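    The 3-sigma significance check described here can be sketched as a large-sample z test on each fitted parameter (the estimates and standard errors below are hypothetical):

```python
from statistics import NormalDist

def significant_at_3sigma(estimate, std_error):
    """Is a fitted parameter significantly different from zero at the
    99.73% confidence level (two-sided 3-sigma)? Large-sample z test."""
    z_crit = NormalDist().inv_cdf(1 - (1 - 0.9973) / 2)  # ~3.0
    return abs(estimate / std_error) > z_crit

# Hypothetical fitted pre-/post-pretreatment changes and their standard errors
hemicellulose_drop = significant_at_3sigma(-8.4, 1.1)   # clearly nonzero
crystallinity_change = significant_at_3sigma(1.2, 2.0)  # consistent with zero
```

    A parameter whose estimate is within three standard errors of zero is, at this confidence level, consistent with zero.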

  2. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew P.

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.
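    The statistic named here, the average value-at-risk at a confidence level, can be sketched with a simple discrete estimator (this sketch ignores the fractional weight a proper CVaR estimator places on the quantile atom):

```python
def average_value_at_risk(losses, alpha):
    """Average value-at-risk (CVaR) at confidence level alpha:
    the mean of the worst (1 - alpha) fraction of losses."""
    xs = sorted(losses)
    k = int(len(xs) * alpha)  # index of the alpha-quantile (the VaR)
    tail = xs[k:]
    return sum(tail) / len(tail)

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
avar_90 = average_value_at_risk(losses, 0.90)  # mean of the worst 10% of losses
avar_50 = average_value_at_risk(losses, 0.50)  # mean of the worst half
```

    Averaging the tail beyond the value-at-risk is what makes the measure coherent and amenable to the quadrature approximations discussed in the abstract.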

  3. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE PAGES

    Kouri, Drew P.

    2018-05-24

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.

  4. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  5. Imaging Depression in Adults with ASD

    DTIC Science & Technology

    2017-10-01

    collected temporally close enough to imaging data in Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk...Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk attrition between Phase 1 and 2, we chose to hold...supervision is ongoing (since 9/2014). • Co-l Dr. Lerner’s 2nd year Clinical Psychology PhD students have participated in ADOS- 2 Introductory Clinical

  6. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; 2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0; 3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm2) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; 4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0.
    For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently exist only for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for 1. quantifying the uncertainty in measured sample results; 2. estimating the true surface concentration corresponding to a surface sample; 3. quantifying the uncertainty of the estimate of the true surface concentration. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
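    The clearance-phase confidence statement (X% confidence that at least Y% of the decision area is free of detectable contamination, given all-negative random samples and FNR = 0) reduces to a simple binomial bound; a sketch, not the report's exact CJR formulas:

```python
import math

def n_samples_for_clearance(x_conf, y_clean):
    """Minimum number of all-negative random samples needed to state with
    confidence x_conf that at least a fraction y_clean of the decision area
    is uncontaminated (binomial argument, FNR assumed to be 0)."""
    return math.ceil(math.log(1 - x_conf) / math.log(y_clean))

n_95_99 = n_samples_for_clearance(0.95, 0.99)  # the familiar "rule of 299"
n_95_90 = n_samples_for_clearance(0.95, 0.90)
```

    The bound follows because if more than a fraction 1 - Y of the area were contaminated, the chance of n random samples all being negative would be below Y^n, which is forced under 1 - X.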

  7. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This enables them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypotheses which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the difficult concepts for students to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on the instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, due to reasons which include their lack of understanding of confidence intervals and probability values.

  8. Review and statistical analysis of the use of ultrasonic velocity for estimating the porosity fraction in polycrystalline materials

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.

    1991-01-01

    A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semiempirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produces predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis of fully-dense materials are in good agreement with those calculated from elastic properties.

  9. Review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.

    1990-01-01

    A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semi-empirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produced predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis for fully-dense materials are in good agreement with those calculated from elastic properties.

  10. Alveolar bone level changes in maxillary expansion treatments assessed through CBCT.

    PubMed

    Pham, Vi; Lagravère, Manuel O

    2017-03-01

    Determine changes in alveolar bone levels during expansion treatments as assessed through cone-beam computed tomography (CBCT). Sixty-one patients from Edmonton, Canada, with maxillary transverse deficiencies were split into three groups. One group was treated with a bone-anchored expander, another group was treated with a tooth-borne maxillary expander (Hyrax) and one group was untreated. CBCTs were obtained from each patient at two time points (initial, T1, and at removal of the appliance after 6 months, T2). CBCTs were analyzed using AVIZO software and landmarks were placed on different dental and skeletal structures. Intra-examiner reliability for landmarks was assessed by randomly selecting 10 images and measuring each landmark 3 times, 1 week apart. Descriptive statistics, intraclass correlation coefficients (ICC) and ANOVA were used to determine whether there were changes to the alveolar bone levels and whether these changes were statistically significant within each group. Landmark reliability showed an ICC of at least 0.99 with a 95% confidence interval and a mean measurement error of at most 0.2067 mm. Descriptive statistics show that changes in alveolar bone levels were less than 1 mm for all three groups and therefore clinically insignificant. Changes between groups were not statistically significantly different from one another (at the P < 0.05 level), with the exception of 8 distances. However, since the distances were small, they were not considered clinically significant. Alveolar bone level changes were similar in maxillary expansion treatments and in the control group. The effects of maxillary expansion treatments on alveolar bone levels are not clinically significant. Copyright © 2016 CEO. Published by Elsevier Masson SAS. All rights reserved.

  11. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliam, David M.

    2011-06-01

    Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
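    The kind of pass-fail calculation described can be sketched with the binomial distribution: the smallest number of detection trials needed to demonstrate a specified PD at confidence CL, for a given failure allowance (a generic sketch; the formulas in [1] may differ in detail):

```python
import math

def achieved_confidence(n, failures, pd_spec):
    """Confidence with which n trials with `failures` misses demonstrate
    PD >= pd_spec: CL = 1 - P(at most `failures` misses | true PD = pd_spec)."""
    q = 1 - pd_spec
    p_le = sum(math.comb(n, k) * q**k * pd_spec**(n - k) for k in range(failures + 1))
    return 1 - p_le

def min_tests(pd_spec, conf, failures=0):
    """Smallest n achieving the required confidence for a given failure allowance."""
    n = failures + 1
    while achieved_confidence(n, failures, pd_spec) < conf:
        n += 1
    return n

n_zero_fail = min_tests(0.90, 0.95)              # all trials must pass
n_one_fail = min_tests(0.90, 0.95, failures=1)   # one miss tolerated, more trials
```

    Allowing even a single failure raises the required number of trials substantially, which is why test planners tabulate requirements over both CL and the failure allowance.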

  12. Statistical errors in molecular dynamics averages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.; Wallace, D.C.

    1985-11-15

    A molecular dynamics calculation produces a time-dependent fluctuating signal whose average is a thermodynamic quantity of interest. The average of the kinetic energy, for example, is proportional to the temperature. A procedure is described for determining when the molecular dynamics system is in equilibrium with respect to a given variable, according to the condition that the mean and the bandwidth of the signal should be sensibly constant in time. Confidence limits for the mean are obtained from an analysis of a finite length of the equilibrium signal. The role of serial correlation in this analysis is discussed. The occurrence of unstable behavior in molecular dynamics data is noted, and a statistical test for a level shift is described.
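    One standard way to obtain confidence limits for the mean of a serially correlated signal is block averaging, in which block means become nearly independent once the blocks are much longer than the correlation time; a sketch on a toy AR(1) series (the block count and signal model are illustrative, not the authors' procedure):

```python
import math
import random
import statistics

def block_average_ci(signal, n_blocks=20, z=1.96):
    """Approximate 95% confidence limits for the mean of a serially
    correlated signal via block averaging."""
    size = len(signal) // n_blocks
    means = [statistics.fmean(signal[i * size:(i + 1) * size]) for i in range(n_blocks)]
    center = statistics.fmean(means)
    sem = statistics.stdev(means) / math.sqrt(n_blocks)
    return center - z * sem, center + z * sem

# Toy serially correlated "kinetic energy" signal with true mean 5.0
rng = random.Random(0)
x, signal = 0.0, []
for _ in range(20000):
    x = 0.9 * x + rng.gauss(0, 1)  # AR(1) noise, correlation time ~10 steps
    signal.append(5.0 + x)
ci_lo, ci_hi = block_average_ci(signal)
```

    Treating the correlated samples as independent would shrink the interval by roughly the square root of the correlation time, which is the pitfall the abstract's discussion of serial correlation addresses.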

  13. Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.

    PubMed

    Carmichael, Owen; Sakhanenko, Lyudmila

    2015-05-15

    We develop statistical methodology for the popular brain imaging technique HARDI, based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and delivers the same information as probabilistic tractography methods. Our method is computationally cheap and provides a well-founded mathematical and statistical framework in which diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way.

  14. Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data

    PubMed Central

    Carmichael, Owen; Sakhanenko, Lyudmila

    2015-01-01

    We develop statistical methodology for the popular brain imaging technique HARDI, based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and delivers the same information as probabilistic tractography methods. Our method is computationally cheap and provides a well-founded mathematical and statistical framework in which diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way. PMID:25937674

  15. Can I Count on Getting Better? Association between Math Anxiety and Poorer Understanding of Medical Risk Reductions.

    PubMed

    Rolison, Jonathan J; Morsanyi, Kinga; O'Connor, Patrick A

    2016-10-01

    Lower numerical ability is associated with poorer understanding of health statistics, such as risk reductions of medical treatment. For many people, despite good numeracy skills, math provokes anxiety that impedes an ability to evaluate numerical information. Math-anxious individuals also report less confidence in their ability to perform math tasks. We hypothesized that, independent of objective numeracy, math anxiety would be associated with poorer responding and lower confidence when calculating risk reductions of medical treatments. Objective numeracy was assessed using an 11-item objective numeracy scale. A 13-item self-report scale was used to assess math anxiety. In experiment 1, participants were asked to interpret the baseline risk of disease and risk reductions associated with treatment options. Participants in experiment 2 were additionally provided a graphical display designed to facilitate the processing of math information and alleviate effects of math anxiety. Confidence ratings were provided on a 7-point scale. Individuals of higher objective numeracy were more likely to respond correctly to baseline risks and risk reductions associated with treatment options and were more confident in their interpretations. Individuals who scored high in math anxiety were instead less likely to correctly interpret the baseline risks and risk reductions and were less confident in their risk calculations as well as in their assessments of the effectiveness of treatment options. Math anxiety predicted confidence levels but not correct responding when controlling for objective numeracy. The graphical display was most effective in increasing confidence among math-anxious individuals. The findings suggest that math anxiety is associated with poorer medical risk interpretation but is more strongly related to confidence in interpretations. © The Author(s) 2015.

  16. The self-consistency model of subjective confidence.

    PubMed

    Koriat, Asher

    2012-01-01

    How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen answer is supported across representations. Confidence is modeled by analogy to the calculation of statistical level of confidence (SLC) in testing hypotheses about a population and represents the participant's assessment of the likelihood that a new sample will yield the same choice. Assuming that participants draw representations from a commonly shared item-specific population of representations, predictions were derived regarding the function relating confidence to inter-participant consensus and intra-participant consistency for the more preferred (majority) and the less preferred (minority) choices. The predicted pattern was confirmed for several different tasks. The confidence-accuracy relationship was shown to be a by-product of the consistency-correctness relationship: It is positive because the answers that are consistently chosen are generally correct, but negative when the wrong answers tend to be favored. The overconfidence bias stems from the reliability-validity discrepancy: Confidence monitors reliability (or self-consistency), but its accuracy is evaluated in calibration studies against correctness. Simulation and empirical results suggest that response speed is a frugal cue for self-consistency, and its validity depends on the validity of self-consistency in predicting performance. Another mnemonic cue, accessibility (the overall amount of information that comes to mind), makes an added, independent contribution. Self-consistency and accessibility may correspond to the 2 parameters that affect SLC: sample variance and sample size.
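    The SLC analogy described above can be sketched numerically. The following is a toy illustration, not the authors' model: binary "representations" are drawn, the majority answer is chosen, and confidence is the plug-in probability that a fresh sample of the same size would repeat that choice. The sample values and the binomial plug-in estimate are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def choice_and_confidence(sample):
    """Pick the majority answer from k binary 'representations' and estimate
    the probability that a fresh sample of size k would repeat that choice
    (a plug-in analogue of a statistical level of confidence)."""
    k = len(sample)
    ones = int(np.sum(sample))
    choice = int(ones * 2 >= k)
    p_hat = max(ones, k - ones) / k          # support for the chosen answer
    # P(majority of a new size-k sample also favors the chosen answer)
    repeat = 1.0 - stats.binom.cdf(k // 2, k, p_hat)
    return choice, repeat

consistent = np.array([1, 1, 1, 1, 1, 1, 0])  # representations mostly agree
mixed      = np.array([1, 1, 1, 1, 0, 0, 0])  # representations conflict

_, conf_consistent = choice_and_confidence(consistent)
_, conf_mixed = choice_and_confidence(mixed)
```

    As the model predicts, confidence is high when the sampled representations consistently support one answer and drops toward chance when they conflict.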

  17. Phosphorylated neurofilament heavy: A potential blood biomarker to evaluate the severity of acute spinal cord injuries in adults

    PubMed Central

    Singh, Ajai; Kumar, Vineet; Ali, Sabir; Mahdi, Abbas Ali; Srivastava, Rajeshwer Nath

    2017-01-01

    Aims: The aim of this study was to analyze whether serial estimation of phosphorylated neurofilament heavy (pNF-H) in blood plasma can act as a potential biomarker for early prediction of the neurological severity of acute spinal cord injuries (SCI) in adults. Settings and Design: Pilot study/observational study. Subjects and Methods: A total of 40 patients (28 cases and 12 controls) of spine injury were included in this study. In the enrolled cases, the plasma level of pNF-H was evaluated in blood samples and neurological evaluation was performed with the American Spinal Injury Association Injury Scale at specified intervals. Serial plasma pNF-H values were then correlated with the neurological status of these patients during follow-up visits and were analyzed statistically. Statistical Analysis Used: Statistical analysis was performed using GraphPad InStat software (version 3.05 for Windows, San Diego, CA, USA). The correlation analysis between clinical progression and pNF-H expression was done using Spearman's correlation. Results: The mean baseline level of pNF-H in cases was 6.40 ± 2.49 ng/ml, whereas in controls it was 0.54 ± 0.27 ng/ml. On analyzing the association between the two by the Mann–Whitney U test, the difference in levels was found to be statistically significant. The association between neurological progression and pNF-H expression was determined using correlation analysis (Spearman's correlation). At the 95% confidence interval, the correlation coefficient was found to be 0.64, and the correlation was statistically significant. Conclusions: Plasma pNF-H levels were elevated in accordance with the severity of SCI. Therefore, pNF-H may be considered a potential biomarker for early determination of the severity of SCI in adult patients. PMID:29291173
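    The two reported tests are standard and can be run in SciPy. This is a rough illustration on synthetic values drawn to mimic the published means and SDs, not the study's data; the severity score below is a hypothetical construct added for the Spearman step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical plasma pNF-H levels (ng/ml), drawn to mimic the reported
# means and SDs (6.40 +/- 2.49 for cases, 0.54 +/- 0.27 for controls).
cases = np.clip(rng.normal(6.40, 2.49, 28), 0.1, None)
controls = np.clip(rng.normal(0.54, 0.27, 12), 0.05, None)

# Mann-Whitney U test for the case/control difference in levels.
u_stat, p_mw = stats.mannwhitneyu(cases, controls, alternative="two-sided")

# Spearman correlation between biomarker level and a hypothetical
# severity score that co-varies with it.
severity = cases + rng.normal(0, 2.0, cases.size)
rho, p_sp = stats.spearmanr(cases, severity)
```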

  18. Association between Serum β2-Microglobulin Level and Infectious Mortality in Hemodialysis Patients

    PubMed Central

    Cheung, Alfred K.; Greene, Tom; Leypoldt, John K.; Yan, Guofen; Allon, Michael; Delmez, James; Levey, Andrew S.; Levin, Nathan W.; Rocco, Michael V.; Schulman, Gerald; Eknoyan, Garabed

    2008-01-01

    Background and objectives: Secondary analysis of the Hemodialysis Study showed that serum β2-microglobulin levels predicted all-cause mortality and that high-flux dialysis was associated with decreased cardiac deaths in hemodialysis patients. This study examined the association of serum β2-microglobulin levels and dialyzer β2-microglobulin kinetics with the two most common causes of deaths: Cardiac and infectious diseases. Cox regression analyses were performed to relate cardiac or infectious deaths to cumulative mean follow-up predialysis serum β2-microglobulin levels while controlling for baseline demographics, comorbidity, residual kidney function, and dialysis-related variables. Results: The cohort of 1813 patients experienced 180 infectious deaths and 315 cardiac deaths. The adjusted hazard ratio for infectious death was 1.21 (95% confidence interval 1.07 to 1.37) per 10-mg/L increase in β2-microglobulin. This association was independent of the prestudy years on dialysis. In contrast, the association between serum β2-microglobulin level and cardiac death was not statistically significant. In similar regression models, higher cumulative mean Kt/V of β2-microglobulin was not significantly associated with either infectious or cardiac mortality in the full cohort but exhibited trends suggesting an association with lower infectious mortality (relative risk 0.93; 95% confidence interval 0.86 to 1.01, for each 0.1-U increase in β2-microglobulin Kt/V) and lower cardiac mortality (relative risk 0.93; 95% confidence interval 0.87 to 1.00) in the subgroup with >3.7 prestudy years of dialysis. Conclusions: These results generally support the notion that middle molecules are associated with systemic toxicity and that their accumulation predisposes dialysis patients to infectious deaths, independent of the duration of maintenance dialysis. PMID:18057309

  19. Identification of Intensity Ratio Break Points from Photon Arrival Trajectories in Ratiometric Single Molecule Spectroscopy

    PubMed Central

    Bingemann, Dieter; Allen, Rachel M.

    2012-01-01

    We describe a model-free statistical method to analyze dual-channel photon arrival trajectories from single molecule spectroscopy and identify break points in the intensity ratio. Photons are binned with a short bin size to calculate the logarithm of the intensity ratio for each bin. Stochastic photon counting noise leads to a near-normal distribution of this logarithm, and the standard Student's t-test is used to find statistically significant changes in this quantity. In stochastic simulations we determine the significance threshold for the t-test's p-value at a given level of confidence. We test the method's sensitivity and accuracy, showing that the analysis reliably locates break points with significant changes in the intensity ratio, with little or no error, in realistic trajectories containing large numbers of small change points, while still identifying a large fraction of the frequent break points with small intensity changes. Based on these results we present an approach to estimate confidence intervals for the identified break point locations and recommend a bin size to choose for the analysis. The method proves powerful and reliable in the analysis of simulated and actual data of single molecule reorientation in a glassy matrix. PMID:22837704
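    The general binning-and-t-test idea can be sketched as follows. This is a minimal illustration with hypothetical Poisson rates and a brute-force scan over candidate split points, not the authors' code; the +0.5 offset to stabilize the logarithm is our assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two detection channels, binned photon counts; the intensity ratio has a
# break point halfway through (rates here are hypothetical).
n_bins = 200
rate_a = np.r_[np.full(100, 50.0), np.full(100, 80.0)]
rate_b = np.full(n_bins, 50.0)
counts_a = rng.poisson(rate_a)
counts_b = rng.poisson(rate_b)

# Log intensity ratio per bin; counting noise makes this near-normal for
# moderate counts, so a two-sample t-test applies.
log_ratio = np.log(counts_a + 0.5) - np.log(counts_b + 0.5)

def scan_break_point(x, min_seg=10):
    """Return the split index with the smallest Welch t-test p-value."""
    best_idx, best_p = None, 1.0
    for k in range(min_seg, len(x) - min_seg):
        _, p = stats.ttest_ind(x[:k], x[k:], equal_var=False)
        if p < best_p:
            best_idx, best_p = k, p
    return best_idx, best_p

idx, p_val = scan_break_point(log_ratio)
```

    In practice the significance threshold for the p-value would be calibrated by simulation, as the abstract describes, rather than simply taking the minimum.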

  20. School neighborhood disadvantage as a predictor of long-term sick leave among teachers: prospective cohort study.

    PubMed

    Virtanen, Marianna; Kivimäki, Mika; Pentti, Jaana; Oksanen, Tuula; Ahola, Kirsi; Linna, Anne; Kouvonen, Anne; Salo, Paula; Vahtera, Jussi

    2010-04-01

    This ongoing prospective study examined characteristics of school neighborhood and neighborhood of residence as predictors of sick leave among school teachers. School neighborhood income data for 226 lower-level comprehensive schools in 10 towns in Finland were derived from Statistics Finland and were linked to register-based data on 3,063 teachers with no long-term sick leave at study entry. Outcome was medically certified (>9 days) sick leave spells during a mean follow-up of 4.3 years from data collection in 2000-2001. A multilevel, cross-classified Poisson regression model, adjusted for age, type of teaching job, length and type of job contract, school size, baseline health status, and income level of the teacher's residential area, showed a rate ratio of 1.30 (95% confidence interval: 1.03, 1.63) for sick leave among female teachers working in schools located in low-income neighborhoods compared with those working in high-income neighborhoods. A low income level of the teacher's residential area was also independently associated with sick leave among female teachers (rate ratio = 1.50, 95% confidence interval: 1.18, 1.91). Exposure to both low-income school neighborhoods and low-income residential neighborhoods was associated with the greatest risk of sick leave (rate ratio = 1.71, 95% confidence interval: 1.27, 2.30). This study indicates that working and living in a socioeconomically disadvantaged neighborhood is associated with increased risk of sick leave among female teachers.

  1. [Effects of Kangaroo Care on anxiety, maternal role confidence, and maternal infant attachment of mothers who delivered preterm infants].

    PubMed

    Lee, Sang Bok; Shin, Hye Sook

    2007-10-01

    The purpose of this study was to examine the effects of Kangaroo Care (KC) on anxiety, maternal role confidence, and maternal infant attachment of mothers who delivered preterm infants. The research design was a nonequivalent control group pretest-posttest. Data were collected from September 1, 2006 to June 20, 2007. The participants were 22 mothers in the experimental group and 21 in the control group. KC was applied three times per day, for a total of ten times in 4 days, to the experimental group. The degree of anxiety was statistically significantly different between the two groups, but differences in maternal role confidence and maternal infant attachment were not statistically significant. This data suggests that KC was effective for mothers' anxiety relief but not for maternal role confidence and maternal infant attachment. The implications for nursing practice and directions for future research need to be discussed.

  2. Investigation of self-compassion, self-confidence and submissive behaviors of nursing students studying in different curriculums.

    PubMed

    Eraydın, Şahizer; Karagözoğlu, Şerife

    2017-07-01

    Today, nursing education, which educates the future members of the nursing profession, aims to instill high self-esteem, self-confidence and self-compassion, independence, assertiveness and the ability to establish good human relations. This aim can only be achieved through a contemporary curriculum supporting students in the educational process and enabling those in charge to make arrangements by taking the characters and needs of each individual into account. The study aims to investigate self-compassion, self-confidence and submissive behaviours of undergraduate nursing students studying in different curriculums. This descriptive, cross-sectional, comparative study was carried out with the 1st- and 4th-year students of three schools, each of which has a different curriculum: conventional, integrated and Problem Based Learning (PBL). The study data were collected with the Self-Compassion Scale (SCS), Self-Confidence Scale (CS) and Submissive Acts Scale (SAS). The data were analyzed through frequency distribution, means, analysis of variance and the significance test for the difference between two means. The mean scores the participating students obtained from the Self-Compassion, Self-Confidence and Submissive Acts Scales were 3.31±0.56, 131.98±20.85 and 36.48±11.43, respectively. The integrated program students' mean self-compassion and self-confidence scores were statistically significantly higher, and their mean submissive behaviour scores lower, than those of the students studying in the other two programs (p<0.05). The analysis of the correlation between the mean scores obtained from the scales revealed statistically significant relationships between the SCS and CS values (r=0.388, p<0.001), between the SCS and SAS values (r=-0.307, p<0.001) and between the CS and SAS values (r=-0.325, p<0.001).
In line with the study results, it can be said that the participating nursing students tended to display moderate levels of self-compassion, self-confidence and submissive behaviours, and that the self-compassion and self-confidence scores of the 4th-year students in the integrated program were higher than those of the students in the other two programs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
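    A minimal sketch of the isotonic-plus-bootstrap idea follows. This is our own simplified illustration, with hypothetical quantal-response data, a basic pool-adjacent-violators fit, and a linear-interpolation inversion of the extra-risk curve; it does not reproduce the paper's estimator details.

```python
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: weighted nondecreasing (isotonic) fit."""
    out = []
    for yi, wi in zip(map(float, y), map(float, w)):
        out.append([yi, wi, 1])                 # block: mean, weight, count
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2, c2 = out.pop()
            m1, w1, c1 = out.pop()
            w_new = w1 + w2
            out.append([(m1 * w1 + m2 * w2) / w_new, w_new, c1 + c2])
    return np.repeat([m for m, _, _ in out], [c for _, _, c in out])

def bmd_isotonic(doses, y, n, bmr=0.10):
    """Dose at which isotonic extra risk first reaches the benchmark response."""
    p_hat = pava(y / n, n)
    extra = (p_hat - p_hat[0]) / (1.0 - p_hat[0])   # extra-risk scale
    grid = np.linspace(doses[0], doses[-1], 1001)
    hits = np.nonzero(np.interp(grid, doses, extra) >= bmr)[0]
    return grid[hits[0]] if hits.size else np.inf

rng = np.random.default_rng(1)
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])
n = np.full(doses.size, 50)
y = rng.binomial(n, 0.05 + 0.40 * doses / (1.0 + doses))  # hypothetical data

bmd = bmd_isotonic(doses, y, n)

# Bootstrap lower confidence limit (BMDL) by resampling responders per dose.
boot = [bmd_isotonic(doses, rng.binomial(n, y / n), n) for _ in range(500)]
bmdl = np.percentile(boot, 5)   # one-sided 95% lower limit
```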

  4. Limitations of diagnostic precision and predictive utility in the individual case: a challenge for forensic practice.

    PubMed

    Cooke, David J; Michie, Christine

    2010-08-01

    Knowledge of group tendencies may not assist accurate predictions in the individual case. This has importance for forensic decision making and for the assessment tools routinely applied in forensic evaluations. In this article, we applied Monte Carlo methods to examine diagnostic agreement with different levels of inter-rater agreement given the distributional characteristics of PCL-R scores. Diagnostic agreement and score agreement were substantially less than expected. In addition, we examined the confidence intervals associated with individual predictions of violent recidivism. On the basis of empirical findings, statistical theory, and logic, we conclude that predictions of future offending cannot be achieved in the individual case with any degree of confidence. We discuss the problems identified in relation to the PCL-R in terms of the broader relevance to all instruments used in forensic decision making.
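    The core Monte Carlo argument (good group-level reliability does not guarantee diagnostic agreement in the individual case) can be sketched with a simple simulation. The score distribution, ICC values, and cutoff below are hypothetical stand-ins, not the paper's PCL-R parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def diagnostic_agreement(icc, n=100_000, mean=20.0, sd=8.0, cutoff=30.0):
    """Fraction of cases where two raters agree on the diagnosis at the
    cutoff, given scores whose correlation across raters equals the ICC."""
    true = rng.normal(mean, sd * np.sqrt(icc), n)
    err_sd = sd * np.sqrt(1.0 - icc)
    r1 = true + rng.normal(0.0, err_sd, n)      # rater 1's scores
    r2 = true + rng.normal(0.0, err_sd, n)      # rater 2's scores
    return np.mean((r1 >= cutoff) == (r2 >= cutoff))

agree_high = diagnostic_agreement(icc=0.90)     # excellent reliability
agree_mod = diagnostic_agreement(icc=0.60)      # moderate reliability
```

    Even with an ICC of 0.90, a nontrivial fraction of individuals near the cut score are classified differently by the two raters, which is the substance of the paper's concern.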

  5. Investigating Mathematics Teachers' Thoughts of Statistical Inference

    ERIC Educational Resources Information Center

    Yang, Kai-Lin

    2012-01-01

    Research on statistical cognition and application suggests that statistical inference concepts are commonly misunderstood by students and even misinterpreted by researchers. Although some research has been done on students' misunderstanding or misconceptions of confidence intervals (CIs), few studies explore either students' or mathematics…

  6. Enhancing residents’ neonatal resuscitation competency through unannounced simulation-based training

    PubMed Central

    Surcouf, Jeffrey W.; Chauvin, Sheila W.; Ferry, Jenelle; Yang, Tong; Barkemeyer, Brian

    2013-01-01

    Background Almost half of pediatric third-year residents surveyed in 2000 had never led a resuscitation event. With increasing restrictions on residency work hours and a decline in patient volume in some hospitals, there is potential for fewer opportunities. Purpose Our primary purpose was to test the hypothesis that an unannounced mock resuscitation in a high-fidelity in-situ simulation training program would improve both residents’ self-confidence and observed performance of adopted best practices in neonatal resuscitation. Methods Each pediatric and medicine–pediatric resident in one pediatric residency program responded to an unannounced scenario that required resuscitation of the high fidelity infant simulator. Structured debriefing followed in the same setting, and a second cycle of scenario response and debriefing occurred before ending the 1-hour training experience. Measures included pre- and post-program confidence questionnaires and trained observer assessments of live and videotaped performances. Results Statistically significant pre–post gains for self-confidence were observed for 8 of the 14 NRP critical behaviors (p=0.00–0.03) reflecting knowledge, technical, and non-technical (teamwork) skills. The pre–post gain in overall confidence score was statistically significant (p=0.00). With a maximum possible assessment score of 41, the average pre–post gain was 8.28 and statistically significant (p<0.001). Results of the video-based assessments revealed statistically significant performance gains (p<0.0001). Correlation between live and video-based assessments were strong for pre–post training scenario performances (pre: r=0.64, p<0.0001; post: r=0.75, p<0.0001). Conclusions Results revealed high receptivity to in-situ, simulation-based training and significant positive gains in confidence and observed competency-related abilities. Results support the potential for other applications in residency and continuing education. PMID:23522399

  7. Fourier and Wavelet Analysis of Coronal Time Series

    NASA Astrophysics Data System (ADS)

    Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.

    2016-10-01

    Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence & Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence & Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.
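    The recommended practice (testing peaks against a pertinent mean-spectrum model rather than detrending) can be sketched for a plain Fourier spectrum. This is a simplified stand-in for the Torrence & Compo machinery: the AR(1) coefficient is taken as known and the variance matching is crude, both of which are our assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulate AR(1) "red noise" plus a periodic signal (period 64 samples).
n, a = 2048, 0.7
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = a * noise[t - 1] + rng.normal()
series = noise + 1.5 * np.sin(2 * np.pi * np.arange(n) / 64)

# One-sided Fourier power spectrum.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2 / n
freq = np.fft.rfftfreq(n)

# Theoretical AR(1) mean spectrum, crudely scaled to the observed variance.
mean_red = (1 - a**2) / (1 - 2 * a * np.cos(2 * np.pi * freq) + a**2)
mean_red *= power[1:].mean() / mean_red[1:].mean()

# 95% confidence level: each estimate is ~ (mean/2) * chi-square with 2 dof.
threshold = mean_red * stats.chi2.ppf(0.95, 2) / 2.0

peak = np.argmax(power[1:]) + 1   # skip the zero-frequency bin
```

    The periodic component stands out above the red-noise confidence level at its true period, whereas low-frequency red-noise power, which detrending-based analyses often misread as signal, does not.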

  8. Open Tibia Shaft Fractures and Soft-Tissue Coverage: The Effects of Management by an Orthopaedic Microsurgical Team.

    PubMed

    VandenBerg, James; Osei, Daniel; Boyer, Martin I; Gardner, Michael J; Ricci, William M; Spraggs-Hughes, Amanda; McAndrew, Christopher M

    2017-06-01

    To compare the timing of soft-tissue (flap) coverage and occurrence of complications before and after the establishment of an integrated orthopaedic trauma/microsurgical team. Retrospective cohort study. A single level 1 trauma center. Twenty-eight subjects (13 pre- and 15 post-integration) with open tibia shaft fractures (OTA/AO 42A, 42B, and 42C) treated with flap coverage between January 2009 and March 2015. Flap coverage for open tibia shaft fractures treated before ("preintegration") and after ("postintegration") implementation of an integrated orthopaedic trauma/microsurgical team. Time from index injury to flap coverage. The unadjusted median time to coverage was 7 days (95% confidence interval, 5.9-8.1) preintegration, and 6 days (95% confidence interval, 4.6-7.4) postintegration (P = 0.48). For preintegration, 9 (69%) of the patients experienced complications, compared with 7 (47%) postintegration (P = 0.23). After formation of an integrated orthopaedic trauma/microsurgery team, we observed a 1-day decrease in median days to coverage from index injury. Overall complications were lower in the postintegration group, although the difference was not statistically significant. Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.

  9. Motivators for children with severe intellectual disabilities in the self-contained classroom: a movement analysis.

    PubMed

    DeBedout, Jennifer K; Worden, Melissa C

    2006-01-01

    The purpose of this study was to analyze the impact of the presence of a music therapist versus the use of switch-activated toys and recorded music in evoking physiological, affective, and vocal responses in school-aged children who are considered SID, or Severely Intellectually Disabled. Researchers studied movement responses to determine how these children might respond more readily to a music therapist interacting musically with them than to the toys and to the recorded music. Each of the 17 children participated in a videotaped session that included 5 trials. Other than the rest trial, motivators presented were: a switch-activated pig toy, switch-activated recorded music, the music therapist playing and singing to the child contingent upon his touching the guitar, and the therapist playing and singing to the child continually. A data collection sheet was developed by the researchers to assess positive responses by measuring limb and head movement, vocal sound, and facial expression change. The Kruskal-Wallis test was used to determine differences in mean responses to music and nonmusic trials. At a 90% level of confidence, statistical significance was attained in comparing the responses to the pig toy, recorded music, and "activated guitar" trial to the trial during which the therapist played guitar and sang continually to the child. In addition, at a 95% level of confidence, statistically significant differences were found in comparing the pig and the recorded music trials with the therapist trial.

  10. Analysis of Vibrio vulnificus Infection Risk When Consuming Depurated Raw Oysters.

    PubMed

    Deng, Kai; Wu, Xulei; Fuentes, Claudio; Su, Yi-Cheng; Welti-Chanes, Jorge; Paredes-Sabja, Daniel; Torres, J Antonio

    2015-06-01

    A beta Poisson dose-response model for Vibrio vulnificus food poisoning cases leading to septicemia was used to evaluate the effect of depuration at 15 °C on the estimated health risk associated with raw oyster consumption. Statistical variability sources included V. vulnificus level at harvest, time and temperature during harvest and transportation to processing plants, decimal reductions (SV) observed during experimental circulation depuration treatments, refrigerated storage time before consumption, oyster size, and number of oysters per consumption event. Although reaching nondetectable V. vulnificus levels (<30 most probable number per gram) throughout the year and achieving a 3.52 SV were estimated to be not possible at the 95% confidence level, depuration for 1, 2, 3, and 4 days would reduce the warm season (June through September) risk from 2,669 cases to 558, 93, 38, and 47 cases per 100 million consumption events, respectively. At the 95% confidence level, 47 and 16 h of depuration would reduce the warm and transition season (April through May and October through November) risk, respectively, to 100 cases per 100 million consumption events, which is assumed to be an acceptable risk; 1 case per 100 million events would be the risk when consuming untreated raw oysters in the cold season (December through March).
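    The beta-Poisson dose-response form referenced above is easy to sketch directly. The parameter values, doses, and log reduction below are purely illustrative assumptions, not the fitted values from this risk assessment.

```python
def beta_poisson_risk(dose, alpha, beta):
    """Approximate beta-Poisson dose-response: P(illness) per exposure."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Hypothetical parameters and exposure; published V. vulnificus fits differ.
alpha, beta = 0.3, 1.1e5
log_reduction = 3.0                      # e.g., ~3 decimal reductions (SV)
dose_raw = 1e6                           # ingested organisms, untreated
dose_dep = dose_raw * 10 ** (-log_reduction)

risk_raw = beta_poisson_risk(dose_raw, alpha, beta)
risk_dep = beta_poisson_risk(dose_dep, alpha, beta)
```

    Because the dose-response curve is steep over this range, each decimal reduction achieved by depuration cuts the per-serving risk substantially, which is why a few days of treatment reduces the estimated case counts by orders of magnitude.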

  11. The 2012 Retirement Confidence Survey: job insecurity, debt weigh on retirement confidence, savings.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2012-03-01

    Americans' confidence in their ability to retire comfortably is stagnant at historically low levels. Just 14 percent are very confident they will have enough money to live comfortably in retirement (statistically equivalent to the low of 13 percent measured in 2011 and 2009). Employment insecurity looms large: Forty-two percent identify job uncertainty as the most pressing financial issue facing most Americans today. Worker confidence about having enough money to pay for medical expenses and long-term care expenses in retirement remains well below their confidence levels for paying basic expenses. Many workers report they have virtually no savings and investments. In total, 60 percent of workers report that the total value of their household's savings and investments, excluding the value of their primary home and any defined benefit plans, is less than $25,000. Twenty-five percent of workers in the 2012 Retirement Confidence Survey say the age at which they expect to retire has changed in the past year. In 1991, 11 percent of workers said they expected to retire after age 65, and by 2012 that had grown to 37 percent. Regardless of those retirement age expectations, and consistent with prior RCS findings, half of current retirees surveyed say they left the work force unexpectedly due to health problems, disability, or changes at their employer, such as downsizing or closure. Those already in retirement tend to express higher levels of confidence than current workers about several key financial aspects of retirement. Retirees report they are significantly more reliant on Social Security as a major source of their retirement income than current workers expect to be. Although 56 percent of workers expect to receive benefits from a defined benefit plan in retirement, only 33 percent report that they and/or their spouse currently have such a benefit with a current or previous employer.
More than half of workers (56 percent) report they and/or their spouse have not tried to calculate how much money they will need to have saved by the time they retire so that they can live comfortably in retirement. Only a minority of workers and retirees feel very comfortable using online technologies to perform various tasks related to financial management. Relatively few use mobile devices such as a smart phone or tablet to manage their finances, and just 10 percent say they are comfortable obtaining advice from financial professionals online.

  12. Police and Suicide Prevention.

    PubMed

    Marzano, Lisa; Smith, Mark; Long, Matthew; Kisby, Charlotte; Hawton, Keith

    2016-05-01

    Police officers are frequently the first responders to individuals in crisis, but generally receive little training for this role. We developed and evaluated training in suicide awareness and prevention for frontline rail police in the UK. Our aim was to investigate the impact of training on officers' suicide prevention attitudes, confidence, and knowledge. Fifty-three participants completed a brief questionnaire before and after undertaking training. In addition, two focus groups were conducted with 10 officers to explore in greater depth their views and experiences of the training program and its perceived impact on practice. Baseline levels of suicide prevention attitudes, confidence, and knowledge were mixed but mostly positive and improved significantly after training. These improvements were seemingly maintained over time, but there was insufficient power to test this statistically. Feedback on the course was generally excellent, notwithstanding some criticisms and suggestions for improvement. Training in suicide prevention appears to have been well received and to have had a beneficial impact on officers' attitudes, confidence, and knowledge. Further research is needed to assess its longer-term effects on police attitudes, skills, and interactions with suicidal individuals, and to establish its relative effectiveness in the context of multilevel interventions.

  13. Subseasonal climate variability for North Carolina, United States

    NASA Astrophysics Data System (ADS)

    Sayemuzzaman, Mohammad; Jha, Manoj K.; Mekonnen, Ademe; Schimmel, Keith A.

    2014-08-01

    Subseasonal trends in climate variability for maximum temperature (Tmax), minimum temperature (Tmin) and precipitation were evaluated for 249 ground-based stations in North Carolina for 1950-2009. The magnitude and significance of the trends at all stations were determined using the non-parametric Theil-Sen Approach (TSA) and the Mann-Kendall (MK) test, respectively. The Sequential Mann-Kendall (SQMK) test was also applied to find the initiation of abrupt trend changes. The lag-1 serial correlation and double mass curve were employed to assess data independence and homogeneity. Using the MK trend test, statistically significant (confidence level ≥ 95% in a two-tailed test) decreasing (increasing) trends were found at 44% (45%) of stations in May (June). In general, trends decreased in the Tmax series and increased in the Tmin series at the subseasonal scale. Using the TSA method, the magnitude of the lowest (highest) decreasing (increasing) trend in Tmax is -0.050 °C/year (+0.052 °C/year) in the monthly series for May (March), and for Tmin is -0.055 °C/year (+0.075 °C/year) in February (December). For the precipitation time series, the TSA method found that the highest (lowest) magnitude of 1.00 mm/year (-1.20 mm/year) is in September (February). The overall trends in the precipitation data series were not significant at the 95% confidence level, except that 17% of stations were found to have significant (confidence level ≥ 95% in a two-tailed test) decreasing trends in February. The statistically significant trend test results were used to develop a spatial distribution of trends: May for Tmax, June for Tmin, and February for precipitation. A correlative analysis of significant temperature and precipitation trend results was performed with respect to large-scale circulation modes (the North Atlantic Oscillation (NAO) and the Southern Oscillation Index (SOI)).
A negative NAO index (a positive El Niño-Southern Oscillation (ENSO) index) was found to be associated with the decreasing precipitation in February during 1960-1980 (2000-2009). The increasing trend in Tmin at the inter-seasonal (April-October) time scale can be associated with the positive NAO index during 1970-2000.
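The two trend tools named above are both available in SciPy, so the analysis pipeline can be sketched compactly: Kendall's tau of the series against time stands in for the MK test, and `scipy.stats.theilslopes` gives the TSA slope with its confidence band. This is a minimal sketch on synthetic data, not the study's 249-station dataset, and it omits the SQMK and serial-correlation steps.

```python
# Sketch: MK-style trend significance (Kendall's tau vs. time) plus a
# Theil-Sen slope estimate, as in the paper's TSA/MK framework.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
# synthetic Tmin series: mild warming trend (+0.02 degC/yr) plus noise
tmin = 5.0 + 0.02 * (years - 1950) + rng.normal(0, 0.5, years.size)

tau, p_value = stats.kendalltau(years, tmin)       # monotonic-trend test
slope, intercept, lo, hi = stats.theilslopes(tmin, years)  # TSA magnitude

significant = p_value < 0.05                       # two-tailed, 95% CL
print(f"tau={tau:.3f}, p={p_value:.4f}, Theil-Sen slope={slope:.4f} degC/yr")
print("significant at 95% CL" if significant else "not significant")
```

With a trend this strong relative to the noise, the two-tailed p-value falls well below 0.05 and the Theil-Sen slope lands near the true +0.02 °C/year.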

  14. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks

    PubMed Central

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing—with its unique statistical properties—became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca. PMID:28817636

  15. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks.

    PubMed

    Ramachandran, Parameswaran; Sánchez-Taltavull, Daniel; Perkins, Theodore J

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing-with its unique statistical properties-became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca.
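The permutation-testing step the abstract mentions, assessing whether a cross-group correlation is larger than chance, can be illustrated in a few lines. This is a generic sketch of that idea, not the authors' Bayesian Relevance Networks code, and the variables are synthetic stand-ins for expression profiles.

```python
# Minimal permutation test for the significance of a correlation between
# two expression-like profiles (illustrative; no Bayesian moderation here).
import numpy as np

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the Pearson correlation of x and y."""
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(n_perm):
        r = np.corrcoef(x, rng.permutation(y))[0, 1]
        if abs(r) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one to avoid zero p-values

rng = np.random.default_rng(1)
a = rng.normal(size=50)
b = 0.8 * a + rng.normal(scale=0.5, size=50)   # strongly correlated pair
print(permutation_pvalue(a, b))
```

In the full algorithm this test would be applied to uncertainty-moderated correlation estimates across many entity pairs, with the permutation null used to control the false discovery rate.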

  16. Gender differences in learning physical science concepts: Does computer animation help equalize them?

    NASA Astrophysics Data System (ADS)

    Jacek, Laura Lee

    This dissertation details an experiment designed to identify gender differences in learning using three experimental treatments: animation, static graphics, and verbal instruction alone. Three learning presentations were used in testing of 332 university students. Statistical analysis was performed using ANOVA, binomial tests for differences of proportion, and descriptive statistics. Results showed that animation significantly improved women's long-term learning over static graphics (p = 0.067), but didn't significantly improve men's long-term learning over static graphics. In all cases, women's scores improved with animation over both other forms of instruction for long-term testing, indicating that future research should not abandon the study of animation as a tool that may promote gender equity in science. Short-term test differences were smaller, and not statistically significant. Variation present in short-term scores was related more to presentation topic than treatment. This research also details characteristics of each of the three presentations, to identify variables (e.g. level of abstraction in presentation) affecting score differences within treatments. Differences between men's and women's scores were non-standard between presentations, but these differences were not statistically significant (long-term p = 0.2961, short-term p = 0.2893). In future research, experiments might be better designed to test these presentational variables in isolation, possibly yielding more distinctive differences between presentational scores. Differences in confidence interval overlaps between presentations suggested that treatment superiority may be somewhat dependent on the design or topic of the learning presentation. Confidence intervals greatly overlap in all situations. This undercut, to some degree, the surety of conclusions indicating superiority of one treatment type over the others. 
However, confidence intervals for animation were smaller, overlapped nearly completely for men and women (there was less overlap between the genders for the other two treatments), and centered around slightly higher means, lending further support to the conclusion that animation helped equalize men's and women's learning. The most important conclusion identified in this research is that gender is an important variable in experimental populations testing animation as a learning device. Averages indicated that both men and women prefer to work with animation over either static graphics or verbal instruction alone.

  17. Descriptive Statistics: Reporting the Answers to the 5 Basic Questions of Who, What, Why, When, Where, and a Sixth, So What?

    PubMed

    Vetter, Thomas R

    2017-11-01

    Descriptive statistics are specific methods basically used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome), and generating a corresponding confidence interval are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. 
In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
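The core quantities the tutorial walks through, central tendency, dispersion, and a confidence interval for a mean, can be computed directly. A small sketch (invented numbers, not data from the article), using the usual t-based 95% CI:

```python
# Descriptive statistics plus a 95% CI for the mean: mean +/- t * SE.
import numpy as np
from scipy import stats

data = np.array([4.1, 5.3, 5.8, 6.0, 6.2, 6.9, 7.4, 8.0, 9.1, 12.5])

mean = data.mean()
median = np.median(data)
sd = data.std(ddof=1)                      # reported alongside the mean
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1                              # reported alongside the median

se = sd / np.sqrt(data.size)
t = stats.t.ppf(0.975, df=data.size - 1)
ci = (mean - t * se, mean + t * se)
print(f"mean={mean:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), "
      f"median={median:.2f}, IQR={iqr:.2f}")
```

Note how the right-skewed value 12.5 pulls the mean above the median, which is exactly the situation in which the tutorial recommends pairing the median with the interquartile range rather than the mean with the standard deviation.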

  18. Correlation between Glycated Hemoglobin and Triglyceride Level in Type 2 Diabetes Mellitus.

    PubMed

    Naqvi, Syeda; Naveed, Shabnam; Ali, Zeeshan; Ahmad, Syed Masroor; Asadullah Khan, Raad; Raj, Honey; Shariff, Shoaib; Rupareliya, Chintan; Zahra, Fatima; Khan, Saba

    2017-06-13

Dyslipidemia is quite prevalent in non-insulin-dependent diabetes mellitus. Maintaining tight glycemic control along with lipid control plays an essential role in preventing the micro- and macro-vascular complications associated with diabetes. The main purpose of the study was to highlight the relationship between glycosylated hemoglobin (HbA1c) and triglyceride levels. This may in turn help in predicting the triglyceride status of type 2 diabetics and therefore in identifying patients at increased risk of cardiovascular events. Hypertriglyceridemia is one of the common risk factors for coronary artery disease in type 2 diabetes mellitus (DM). Careful monitoring of the blood glucose level can be used to predict lipid status and can prevent most of the complications associated with the disease. This is a cross-sectional study using data collected from the outpatient diabetic clinic of Jinnah Postgraduate Medical Centre (JPMC), Karachi, Pakistan. Patients aged 18 years and above were recruited from the clinic. A total of 509 consenting patients with type 2 diabetes mellitus were enrolled over a period of 11 months. For statistical analysis, SPSS Statistics for Windows, Version 17.0 (IBM Corp, Armonk, New York) was used, and the Chi-square test and Pearson's correlation coefficient were used to find the association between triglycerides and HbA1c. The HbA1c values were categorized into four groups on the basis of cut-offs. Chi-square was used to test the association between HbA1c at various cut-off values and high triglyceride levels. The odds ratio and its 95% confidence interval were calculated to estimate the level of risk between high triglyceride levels and the HbA1c groups. A p-value < 0.05 was considered statistically significant for all tests of significance. The association of high triglycerides was evaluated in four different groups of HbA1c, with cut-offs of 7%, 8%, 9%, and 10%, respectively.
With HbA1c cut-off value of 7%, 74% patients had high triglycerides and showed a significant association with high triglyceride levels at p < 0.001 and odds ratio was 2.038 (95% confidence interval: 1.397 - 2.972). Logistic regression models were adjusted for demographic factors (age, race, gender), lifestyle factors (smoking, body mass index, lifestyle) and health status factors (blood pressure, physician-rated health status). After adjusting for relevant covariates, glycated hemoglobin was positively correlated with high triglyceride. Hence, HbA1c can be an indicator of triglyceride level and can be one of the predictors of cardiovascular risk factors in type 2 diabetes mellitus.
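The odds-ratio-with-95%-CI calculation reported above is a standard computation on a 2x2 table (the Woolf logit method). A sketch with hypothetical counts, not the study's actual data:

```python
# Odds ratio and 95% CI from a 2x2 table via the log-odds standard error.
import math

# rows = HbA1c >= 7% vs < 7%; cols = high vs normal triglycerides
a, b = 150, 60    # exposed group: high TG, normal TG (hypothetical counts)
c, d = 80, 65     # unexposed group: high TG, normal TG (hypothetical counts)

odds_ratio = (a * d) / (b * c)
log_se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * log_se)
hi = math.exp(math.log(odds_ratio) + 1.96 * log_se)
print(f"OR={odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval that excludes 1 (as the study's 1.397 to 2.972 does) indicates a statistically significant association between the HbA1c cut-off and high triglycerides.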

  19. A Statistics-based Platform for Quantitative N-terminome Analysis and Identification of Protease Cleavage Products*

    PubMed Central

    auf dem Keller, Ulrich; Prudova, Anna; Gioia, Magda; Butler, Georgina S.; Overall, Christopher M.

    2010-01-01

    Terminal amine isotopic labeling of substrates (TAILS), our recently introduced platform for quantitative N-terminome analysis, enables wide dynamic range identification of original mature protein N-termini and protease cleavage products. Modifying TAILS by use of isobaric tag for relative and absolute quantification (iTRAQ)-like labels for quantification together with a robust statistical classifier derived from experimental protease cleavage data, we report reliable and statistically valid identification of proteolytic events in complex biological systems in MS2 mode. The statistical classifier is supported by a novel parameter evaluating ion intensity-dependent quantification confidences of single peptide quantifications, the quantification confidence factor (QCF). Furthermore, the isoform assignment score (IAS) is introduced, a new scoring system for the evaluation of single peptide-to-protein assignments based on high confidence protein identifications in the same sample prior to negative selection enrichment of N-terminal peptides. By these approaches, we identified and validated, in addition to known substrates, low abundance novel bioactive MMP-2 targets including the plasminogen receptor S100A10 (p11) and the proinflammatory cytokine proEMAP/p43 that were previously undescribed. PMID:20305283

  20. Heritability of and mortality prediction with a longevity phenotype: the healthy aging index.

    PubMed

    Sanders, Jason L; Minster, Ryan L; Barmada, M Michael; Matteini, Amy M; Boudreau, Robert M; Christensen, Kaare; Mayeux, Richard; Borecki, Ingrid B; Zhang, Qunyuan; Perls, Thomas; Newman, Anne B

    2014-04-01

    Longevity-associated genes may modulate risk for age-related diseases and survival. The Healthy Aging Index (HAI) may be a subphenotype of longevity, which can be constructed in many studies for genetic analysis. We investigated the HAI's association with survival in the Cardiovascular Health Study and heritability in the Long Life Family Study. The HAI includes systolic blood pressure, pulmonary vital capacity, creatinine, fasting glucose, and Modified Mini-Mental Status Examination score, each scored 0, 1, or 2 using approximate tertiles and summed from 0 (healthy) to 10 (unhealthy). In Cardiovascular Health Study, the association with mortality and accuracy predicting death were determined with Cox proportional hazards analysis and c-statistics, respectively. In Long Life Family Study, heritability was determined with a variance component-based family analysis using a polygenic model. Cardiovascular Health Study participants with unhealthier index scores (7-10) had 2.62-fold (95% confidence interval: 2.22, 3.10) greater mortality than participants with healthier scores (0-2). The HAI alone predicted death moderately well (c-statistic = 0.643, 95% confidence interval: 0.626, 0.661, p < .0001) and slightly worse than age alone (c-statistic = 0.700, 95% confidence interval: 0.684, 0.717, p < .0001; p < .0001 for comparison of c-statistics). Prediction increased significantly with adjustment for demographics, health behaviors, and clinical comorbidities (c-statistic = 0.780, 95% confidence interval: 0.765, 0.794, p < .0001). In Long Life Family Study, the heritability of the HAI was 0.295 (p < .0001) overall, 0.387 (p < .0001) in probands, and 0.238 (p = .0004) in offspring. The HAI should be investigated further as a candidate phenotype for uncovering longevity-associated genes in humans.

  1. Heritability of and Mortality Prediction With a Longevity Phenotype: The Healthy Aging Index

    PubMed Central

    2014-01-01

    Background. Longevity-associated genes may modulate risk for age-related diseases and survival. The Healthy Aging Index (HAI) may be a subphenotype of longevity, which can be constructed in many studies for genetic analysis. We investigated the HAI’s association with survival in the Cardiovascular Health Study and heritability in the Long Life Family Study. Methods. The HAI includes systolic blood pressure, pulmonary vital capacity, creatinine, fasting glucose, and Modified Mini-Mental Status Examination score, each scored 0, 1, or 2 using approximate tertiles and summed from 0 (healthy) to 10 (unhealthy). In Cardiovascular Health Study, the association with mortality and accuracy predicting death were determined with Cox proportional hazards analysis and c-statistics, respectively. In Long Life Family Study, heritability was determined with a variance component–based family analysis using a polygenic model. Results. Cardiovascular Health Study participants with unhealthier index scores (7–10) had 2.62-fold (95% confidence interval: 2.22, 3.10) greater mortality than participants with healthier scores (0–2). The HAI alone predicted death moderately well (c-statistic = 0.643, 95% confidence interval: 0.626, 0.661, p < .0001) and slightly worse than age alone (c-statistic = 0.700, 95% confidence interval: 0.684, 0.717, p < .0001; p < .0001 for comparison of c-statistics). Prediction increased significantly with adjustment for demographics, health behaviors, and clinical comorbidities (c-statistic = 0.780, 95% confidence interval: 0.765, 0.794, p < .0001). In Long Life Family Study, the heritability of the HAI was 0.295 (p < .0001) overall, 0.387 (p < .0001) in probands, and 0.238 (p = .0004) in offspring. Conclusion. The HAI should be investigated further as a candidate phenotype for uncovering longevity-associated genes in humans. PMID:23913930

  2. DIDA - Dynamic Image Disparity Analysis.

    DTIC Science & Technology

    1982-12-31

register the image only where the disparity estimates are believed to be correct. Therefore, in our implementation we register in proportion to the... average motion is computed as the average of neighbors' motions weighted by their confidence. Since estimates contribute only in proportion to their... confidence statistics in the same proportion as they contribute to the average disparity estimate. Two confidences are derived from the weighted

  3. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  4. Statistics Using Just One Formula

    ERIC Educational Resources Information Center

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
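The abstract is truncated, but a common candidate for a single introductory margin-of-error formula is estimate ± 2 · spread/√n, applied uniformly to means and proportions. The sketch below assumes that reading; the formula Rosenthal actually advocates may differ in detail.

```python
# One margin-of-error formula, reused for a mean and for a proportion
# (an assumed formulation, illustrating the article's one-formula approach).
import math

def margin_of_error(spread, n):
    """Approximate 95% margin of error: 2 * spread / sqrt(n)."""
    return 2 * spread / math.sqrt(n)

# confidence interval for a mean
mean, sd, n = 72.0, 10.0, 100
print(f"mean CI: {mean:.1f} +/- {margin_of_error(sd, n):.1f}")

# confidence interval for a proportion (spread = sqrt(p*(1-p)))
p, m = 0.60, 400
print(f"proportion CI: {p:.2f} +/- {margin_of_error(math.sqrt(p * (1 - p)), m):.3f}")
```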

  5. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subject (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal, and log-logistic distributions was chosen based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the most optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at the 5, 25, and 50% risk levels for lower leg fracture.
For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
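Reading peak forces off a fitted Weibull injury risk curve at chosen risk levels, as in the results above, can be sketched with SciPy. The data below are synthetic, and this toy fit ignores the study's age covariate and censoring scheme.

```python
# Fit a Weibull distribution to synthetic peak-force data (MLE), then read
# off the force at selected injury risk levels via the quantile function.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# synthetic peak forces at fracture (kN), roughly in the paper's range
forces = stats.weibull_min.rvs(c=4.0, scale=9.0, size=200, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(forces, floc=0)   # MLE fit
for risk in (0.05, 0.25, 0.50):
    f = stats.weibull_min.ppf(risk, shape, loc=loc, scale=scale)
    print(f"{int(risk * 100)}% injury risk at ~{f:.1f} kN")
```

In the actual analysis the survival model is fit to both injury and noninjury (censored) tests, which is what lets the noninjury experiments tighten the confidence intervals.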

  6. Mathematics Anxiety and Preservice Elementary Teachers' Confidence to Teach Mathematics and Science

    ERIC Educational Resources Information Center

    Bursal, Murat; Paznokas, Lynda

    2006-01-01

    Sixty-five preservice elementary teachers' math anxiety levels and confidence levels to teach elementary mathematics and science were measured. The confidence scores of subjects in different math anxiety groups were compared and the relationships between their math anxiety levels and confidence levels to teach mathematics and science were…

  7. Search for Neutrinoless Double-Beta Decay with the Upgraded EXO-200 Detector

    DOE PAGES

    Albert, J. B.; Anton, G.; Badhrees, I.; ...

    2018-02-15

Results from a search for neutrinoless double-beta decay (0νββ) of 136Xe are presented using the first year of data taken with the upgraded EXO-200 detector. Relative to previous searches by EXO-200, the energy resolution of the detector has been improved to σ/E = 1.23%, the electric field in the drift region has been raised by 50%, and a system to suppress radon in the volume between the cryostat and lead shielding has been implemented. In addition, analysis techniques that improve topological discrimination between 0νββ and background events have been developed. Incorporating these hardware and analysis improvements, the median 90% confidence level 0νββ half-life sensitivity after combining with the full data set acquired before the upgrade has increased twofold to 3.7 × 10^25 yr. Finally, no statistically significant evidence for 0νββ is observed, leading to a lower limit on the 0νββ half-life of 1.8 × 10^25 yr at the 90% confidence level.

  8. Improving medication practices for persons with intellectual and developmental disability: Educating direct support staff using simulation, debriefing, and reflection.

    PubMed

    Auberry, Kathy; Wills, Katherine; Shaver, Carrie

    2017-01-01

Direct support professionals (DSPs) are increasingly active in medication administration for people with intellectual and developmental disabilities, thus supplementing nursing and family caretakers. Providing workplace training for DSPs is often the duty of nursing personnel. This article presents empirical data and design suggestions for including simulations, debriefing, and written reflective practice during in-service training for DSPs in order to improve DSPs' skills and confidence related to medication administration. Quantitative study results demonstrate that DSPs acknowledge that their skill level and confidence rose significantly after hands-on simulations. The skill-level effect was statistically significant for general medication management (-4.5, p < 0.001) and gastrointestinal medication management (-4.4, p < 0.001). Qualitative findings show a deep desire by DSPs to not just be "pill poppers" but to understand the medical processes, causalities, and consequences of their medication administration. On the basis of our results, the authors make recommendations regarding how to combine DSP workplace simulations and debriefing with written reflective practice in DSP continuing education.

  9. Search for Neutrinoless Double-Beta Decay with the Upgraded EXO-200 Detector

    NASA Astrophysics Data System (ADS)

    Albert, J. B.; Anton, G.; Badhrees, I.; Barbeau, P. S.; Bayerlein, R.; Beck, D.; Belov, V.; Breidenbach, M.; Brunner, T.; Cao, G. F.; Cen, W. R.; Chambers, C.; Cleveland, B.; Coon, M.; Craycraft, A.; Cree, W.; Daniels, T.; Danilov, M.; Daugherty, S. J.; Daughhetee, J.; Davis, J.; Delaquis, S.; Der Mesrobian-Kabakian, A.; DeVoe, R.; Didberidze, T.; Dilling, J.; Dolgolenko, A.; Dolinski, M. J.; Fairbank, W.; Farine, J.; Feyzbakhsh, S.; Fierlinger, P.; Fudenberg, D.; Gornea, R.; Graham, K.; Gratta, G.; Hall, C.; Hansen, E. V.; Hoessl, J.; Hufschmidt, P.; Hughes, M.; Jamil, A.; Jewell, M. J.; Johnson, A.; Johnston, S.; Karelin, A.; Kaufman, L. J.; Koffas, T.; Kravitz, S.; Krücken, R.; Kuchenkov, A.; Kumar, K. S.; Lan, Y.; Leonard, D. S.; Li, G. S.; Li, S.; Licciardi, C.; Lin, Y. H.; MacLellan, R.; Michel, T.; Mong, B.; Moore, D.; Murray, K.; Nelson, R.; Njoya, O.; Odian, A.; Ostrovskiy, I.; Piepke, A.; Pocar, A.; Retière, F.; Rowson, P. C.; Russell, J. J.; Schmidt, S.; Schubert, A.; Sinclair, D.; Stekhanov, V.; Tarka, M.; Tolba, T.; Tsang, R.; Vogel, P.; Vuilleumier, J.-L.; Wagenpfeil, M.; Waite, A.; Walton, T.; Weber, M.; Wen, L. J.; Wichoski, U.; Wrede, G.; Yang, L.; Yen, Y.-R.; Zeldovich, O. Ya.; Zettlemoyer, J.; Ziegler, T.; EXO-200 Collaboration

    2018-02-01

Results from a search for neutrinoless double-beta decay (0νββ) of 136Xe are presented using the first year of data taken with the upgraded EXO-200 detector. Relative to previous searches by EXO-200, the energy resolution of the detector has been improved to σ/E = 1.23%, the electric field in the drift region has been raised by 50%, and a system to suppress radon in the volume between the cryostat and lead shielding has been implemented. In addition, analysis techniques that improve topological discrimination between 0νββ and background events have been developed. Incorporating these hardware and analysis improvements, the median 90% confidence level 0νββ half-life sensitivity after combining with the full data set acquired before the upgrade has increased twofold to 3.7 × 10^25 yr. No statistically significant evidence for 0νββ is observed, leading to a lower limit on the 0νββ half-life of 1.8 × 10^25 yr at the 90% confidence level.

  10. Meta-analysis of Specific Music Therapy Measures and Their Implications for the Health Care System.

    PubMed

    Llovet, Aliza K

The purpose of the activity reported in this article was to conduct an exhaustive search of the Journal of Music Therapy, filter the articles on the desired parameters, and organize data for analysis and interpretation. Specifically, the researcher studied whether (a) there was a significant difference in physiological measures, (b) there was a significant difference in quality of life, (c) there was a significant difference in satisfaction levels, (d) there was a significant difference in pain reduction, (e) there was a significant difference in procedural length, (f) there was a significant difference in length of stay, and (g) whether the overall effect size and 95% confidence interval support the recommendation of music therapy in the health care setting. Twenty-four studies met criteria for inclusion in the systematic review. Results revealed an overall effect size of d = 0.61. However, results of the 95% confidence interval included 0, which suggests that there may not be a statistically significant difference in the health care setting on desired measures. Further results and implications are discussed within the article.
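The review's decision rule, an effect size is not statistically significant when its 95% confidence interval includes 0, is easy to make concrete. The pooled standard error below is hypothetical, not the meta-analysis's value:

```python
# Check whether a pooled effect size's 95% CI includes 0.
d_pooled = 0.61          # overall standardized mean difference (from review)
se_pooled = 0.34         # hypothetical pooled standard error

lo = d_pooled - 1.96 * se_pooled
hi = d_pooled + 1.96 * se_pooled
includes_zero = lo <= 0 <= hi
print(f"d = {d_pooled} (95% CI {lo:.2f} to {hi:.2f}); includes 0: {includes_zero}")
```

This illustrates how a moderate point estimate (d = 0.61) can still fail to reach significance when the interval around it is wide.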

  11. Gender and Age Analyses of NIRS/STAI Pearson Correlation Coefficients at Resting State.

    PubMed

    Matsumoto, T; Fuchita, Y; Ichikawa, K; Fukuda, Y; Takemura, N; Sakatani, K

    2016-01-01

According to the valence asymmetry hypothesis, the left/right asymmetry of PFC activity is correlated with specific emotional responses to mental stress and personality traits. In a previous study we measured spontaneous oscillation of oxy-Hb concentrations in the bilateral PFC at rest in normal adults employing two-channel portable NIRS and computed the laterality index at rest (LIR). We investigated the Pearson correlation coefficient between the LIR and anxiety levels evaluated by the State-Trait Anxiety Inventory (STAI) test. We found that subjects with right-dominant activity at rest showed higher STAI scores, while those with left-dominant oxy-Hb changes at rest showed lower STAI scores, such that the Pearson correlation coefficient between LIR and STAI was positive. This study performed a bootstrap analysis on the data and obtained the following statistics for the target correlation coefficient: mean = 0.4925 and lower confidence limit = 0.177 at the 0.05 significance level. Using the KS-test, we demonstrated that the correlation did not depend on age, whereas it did depend on gender.
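The bootstrap estimate of a correlation's mean and lower confidence limit described above can be sketched generically. The data here are synthetic stand-ins, not the NIRS/STAI measurements:

```python
# Bootstrap the Pearson correlation: resample subjects with replacement,
# then take the mean and a one-sided 95% lower confidence limit.
import numpy as np

rng = np.random.default_rng(3)
n = 60
lir = rng.normal(size=n)
stai = 0.5 * lir + rng.normal(scale=0.9, size=n)   # positively related scores

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)               # resample subjects
    boot.append(np.corrcoef(lir[idx], stai[idx])[0, 1])
boot = np.array(boot)

mean_r = boot.mean()
lower_limit = np.percentile(boot, 5)               # one-sided 95% lower bound
print(f"bootstrap mean r={mean_r:.3f}, lower confidence limit={lower_limit:.3f}")
```

A lower confidence limit above 0, like the study's 0.177, is what licenses the conclusion that the correlation is positive rather than a sampling artifact.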

  12. A Northern Sky Survey for Point-like Sources of EeV Neutral Particles with the Telescope Array Experiment

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abe, M.; Abu-Zayyad, T.; Allen, M.; Anderson, R.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Chae, M. J.; Cheon, B. G.; Chiba, J.; Chikawa, M.; Cho, W. R.; Fujii, T.; Fukushima, M.; Goto, T.; Hanlon, W.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Honda, K.; Ikeda, D.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kawata, K.; Kido, E.; Kim, H. B.; Kim, J. H.; Kim, J. H.; Kitamura, S.; Kitamura, Y.; Kuzmin, V.; Kwon, Y. J.; Lan, J.; Lim, S. I.; Lundquist, J. P.; Machida, K.; Martens, K.; Matsuda, T.; Matsuyama, T.; Matthews, J. N.; Minamino, M.; Mukai, K.; Myers, I.; Nagasawa, K.; Nagataki, S.; Nakamura, T.; Nonaka, T.; Nozato, A.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Oki, K.; Okuda, T.; Ono, M.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sakurai, N.; Sampson, A. L.; Scott, L. M.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Smith, J. D.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzawa, T.; Takamura, M.; Takeda, M.; Takeishi, R.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Uchihori, Y.; Udo, S.; Urban, F.; Vasiloff, G.; Wong, T.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yashiro, K.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zollinger, R.; Zundel, Z.

    2015-05-01

We report on the search for steady point-like sources of neutral particles around 10^18 eV between 2008 and 2013 May with the scintillator surface detector (SD) of the Telescope Array experiment. We found overall no significant point-like excess above 0.5 EeV in the northern sky. Subsequently, we also searched for coincidence with the Fermi bright Galactic sources. No significant coincidence was found within the statistical uncertainty. Hence, we set an upper limit on the neutron flux that corresponds to an averaged flux of 0.07 km^-2 yr^-1 for E > 1 EeV in the northern sky at the 95% confidence level. This is the most stringent flux upper limit in a northern sky survey assuming point-like sources. The upper limit at the 95% confidence level on the neutron flux from Cygnus X-3 is also set to 0.2 km^-2 yr^-1 for E > 0.5 EeV. This is an order of magnitude lower than previous flux measurements.

  13. Search for Neutrinoless Double-Beta Decay with the Upgraded EXO-200 Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J. B.; Anton, G.; Badhrees, I.

    Results from a search for neutrinoless double-beta decay (0νββ) of ¹³⁶Xe are presented using the first year of data taken with the upgraded EXO-200 detector. Relative to previous searches by EXO-200, the energy resolution of the detector has been improved to σ/E = 1.23%, the electric field in the drift region has been raised by 50%, and a system to suppress radon in the volume between the cryostat and lead shielding has been implemented. In addition, analysis techniques that improve topological discrimination between 0νββ and background events have been developed. Incorporating these hardware and analysis improvements, the median 90% confidence level 0νββ half-life sensitivity after combining with the full data set acquired before the upgrade has increased twofold to 3.7 × 10²⁵ yr. Finally, no statistically significant evidence for 0νββ is observed, leading to a lower limit on the 0νββ half-life of 1.8 × 10²⁵ yr at the 90% confidence level.
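A half-life lower limit of this kind follows from a standard counting argument: if at most S_up signal decays are compatible with the data at the chosen confidence level, then T½ > ln(2) · N · ε · t / S_up. A sketch with entirely hypothetical inputs (not the EXO-200 values):

```python
import math

# Hypothetical numbers for illustration only (not the EXO-200 values):
n_atoms    = 1.0e27     # number of candidate isotope atoms in the fiducial volume
live_time  = 1.0        # exposure time in years
efficiency = 0.8        # signal detection efficiency
s_upper    = 30.0       # 90% CL upper limit on the number of signal counts

# A non-observation translates into a half-life lower limit:
#   T_half > ln(2) * N * eps * t / S_up
t_half_limit = math.log(2) * n_atoms * efficiency * live_time / s_upper
```

With these placeholder inputs the limit lands near 1.8 × 10²⁵ yr; the real analysis folds the pre-upgrade data set and background model into S_up.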

  14. How Statistics "Excel" Online.

    ERIC Educational Resources Information Center

    Chao, Faith; Davis, James

    2000-01-01

    Discusses the use of Microsoft Excel software and provides examples of its use in an online statistics course at Golden Gate University in the areas of randomness and probability, sampling distributions, confidence intervals, and regression analysis. (LRW)

  15. Statistical tests of peaks and periodicities in the observed redshift distribution of quasi-stellar objects

    NASA Astrophysics Data System (ADS)

    Duari, Debiprosad; Gupta, Patrick D.; Narlikar, Jayant V.

    1992-01-01

    An overview of statistical tests of peaks and periodicities in the redshift distribution of quasi-stellar objects is presented. The tests include the power-spectrum analysis carried out by Burbidge and O'Dell (1972), the generalized Rayleigh test, the Kolmogorov-Smirnov test, and the 'comb-tooth' test. The tests reveal moderate to strong evidence for periodicities of 0.0565 and 0.0127-0.0129. The confidence level of the 0.0565 periodicity in fact marginally increases when redshifts are transformed to the Galactocentric frame. The same periodicity, first noticed in 1968, persists to date with a QSO population that has since grown to about 30 times its original size. The prima facie evidence for periodicities in ln(1 + z) is found to be of no great significance.
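The generalized Rayleigh test mentioned above checks whether values cluster at a common phase modulo a trial period. A compact sketch of the statistic (z = nR̄², with p ≈ e⁻ᶻ as the usual large-n approximation; the sample data are invented):

```python
import math

def rayleigh_test(values, period):
    """Generalized Rayleigh test: are `values` clustered modulo `period`?
    Returns (z, p) where z = n * Rbar^2 and p ≈ exp(-z) for large n."""
    phases = [2 * math.pi * (v / period) for v in values]
    n = len(values)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    z = n * (c * c + s * s)
    return z, math.exp(-z)

# Values tightly clustered modulo the trial period give z ≈ n and a tiny
# p-value; uniformly scattered values give z of order 1.
clustered = [0.0565 * k + 0.001 for k in range(1, 41)]
z, p = rayleigh_test(clustered, 0.0565)
```

Scanning z over a grid of trial periods is how a periodicity search of this type is typically run; the multiple-trials penalty then has to be applied to the smallest p found.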

  16. Factors influencing students' perceptions of their quantitative skills

    NASA Astrophysics Data System (ADS)

    Matthews, Kelly E.; Hodgson, Yvonne; Varsavsky, Cristina

    2013-09-01

    There is international agreement that quantitative skills (QS) are an essential graduate competence in science. QS refer to the application of mathematical and statistical thinking and reasoning in science. This study reports on the use of the Science Students Skills Inventory to capture final year science students' perceptions of their QS across multiple indicators, at two Australian research-intensive universities. Statistical analysis reveals several variables predicting higher levels of self-rated competence in QS: students' grade point average, students' perceptions of inclusion of QS in the science degree programme, their confidence in QS, and their belief that QS will be useful in the future. The findings are discussed in terms of implications for designing science curricula more effectively to build students' QS throughout science degree programmes. Suggestions for further research are offered.

  17. [Pregnant women's attitudes towards the acceptable age limits for conceiving and giving birth to a child].

    PubMed

    Dakov, T; Dimitrova, V; Todorov, T

    2014-01-01

    To assess whether there are socially determined permissible and desirable age limits for conceiving and childbirth among pregnant women in Bulgaria, and their relation to age, general and obstetric medical history, method of conception, level of education, and whether pregnancy had been postponed or not. 388 patients from the Fetal Medicine Clinic of the State University Hospital "Maichin Dom" in Sofia were provided with anonymous questionnaires containing 38 questions. Two of the questions were essential: 1) "What is the maximal permissible age for a woman to become pregnant and give birth to a child?" 2) "What is the maximal desirable age for a woman to become pregnant and deliver the planned number of children?" The questionnaire also contained 23 questions related to the demographic characteristics of the participants and to their general and obstetric medical history. Data were processed with the SPSS 13.0 statistical package. Descriptive and comparative analysis was performed after grouping according to one or more characteristics. P values < 0.05 were considered statistically significant. 54.2% (208/388) of the respondents set a limit on the maximal permissible age for a woman to conceive and give birth to a child; 53.4% (111/208) of them set the age limit at 40 years (28.9% of all patients). 63.6% (245/388) of those surveyed set a desirable age limit for conception and giving birth; among them, 82.9% (203/245) set the limit at 40 years. The factors that significantly influenced attitudes towards the permissible age for conception/giving birth were the mode of conception, age, and the level of education. Patients who had conceived spontaneously and had a higher educational level were more confident when assessing the permissible age for conception/giving birth. Patients who had conceived by IVF/ICSI were significantly less confident answering the questions about age limits.
The understanding of the permissible age for conception was not influenced by past obstetric history, deliberate postponement of reproductive plans, or the presence of chronic medical disorders. The understanding that pregnancy is always permissible (irrespective of age) was not significantly influenced by any of the factors. The understanding of the desirable age for conceiving/giving birth was significantly influenced only by educational level: patients with a higher degree of education were more confident in setting a desirable age limit.

  18. Uncertainties in Galactic Chemical Evolution Models

    DOE PAGES

    Cote, Benoit; Ritter, Christian; O'Shea, Brian W.; ...

    2016-06-15

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels.
The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
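The Monte Carlo procedure described above — sample the uncertain inputs from their distributions, rerun the model, and read off percentile bands — can be sketched as follows; the toy model and parameter values are illustrative stand-ins, not the authors' code:

```python
import random

random.seed(42)

def toy_model(slope, normalization, metallicity):
    # Stand-in for a chemical-evolution prediction, e.g. an abundance ratio
    # evaluated at a fixed metallicity.
    return normalization - slope * metallicity

# Draw each uncertain input from its assumed distribution (hypothetical values).
runs = []
for _ in range(2000):
    slope = random.gauss(2.3, 0.3)   # e.g. IMF high-mass slope
    norm = random.gauss(0.4, 0.1)    # e.g. yield normalization
    runs.append(toy_model(slope, norm, metallicity=-1.0))

runs.sort()

def percentile(sorted_xs, q):
    return sorted_xs[min(len(sorted_xs) - 1, int(q * len(sorted_xs)))]

# Central value with 68% and 95% confidence bands from the Monte Carlo sample.
median = percentile(runs, 0.50)
band68 = (percentile(runs, 0.16), percentile(runs, 0.84))
band95 = (percentile(runs, 0.025), percentile(runs, 0.975))
```

Repeating this at each metallicity grid point yields the uncertainty bands whose thicknesses (in dex) the abstract reports.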

  20. Importance of Abnormal Chloride Homeostasis in Stable Chronic Heart Failure.

    PubMed

    Grodin, Justin L; Verbrugge, Frederik H; Ellis, Stephen G; Mullens, Wilfried; Testani, Jeffrey M; Tang, W H Wilson

    2016-01-01

    The aim of this analysis was to determine the long-term prognostic value of lower serum chloride in patients with stable chronic heart failure. Electrolyte abnormalities are prevalent in patients with chronic heart failure. Little is known regarding the prognostic implications of lower serum chloride. Serum chloride was measured in 1673 consecutively consented stable patients with a history of heart failure undergoing elective diagnostic coronary angiography. All patients were followed for 5-year all-cause mortality, and survival models were adjusted for variables that confounded the chloride-risk relationship. The average chloride level was 102 ± 4 mEq/L. Over 6772 person-years of follow-up, there were 547 deaths. Lower chloride (per standard deviation decrease) was associated with a higher adjusted risk of mortality (hazard ratio 1.29, 95% confidence interval 1.12-1.49; P < 0.001). Chloride levels net-reclassified risk in 10.4% (P = 0.03) when added to a multivariable model (with a resultant C-statistic of 0.70), in which sodium levels were not prognostic (P = 0.30). In comparison to those with above-first-quartile chloride (≥ 101 mEq/L) and sodium (≥ 138 mEq/L), subjects with first-quartile chloride had a higher adjusted mortality risk, whether they had first-quartile sodium (hazard ratio 1.35, 95% confidence interval 1.08-1.69; P = 0.008) or higher (hazard ratio 1.43, 95% confidence interval 1.12-1.85; P = 0.005). However, subjects with first-quartile sodium but above-first-quartile chloride had no association with mortality (P = 0.67). Lower serum chloride levels are independently and incrementally associated with increased mortality risk in patients with chronic heart failure. A better understanding of the biological role of serum chloride is warranted. © 2015 American Heart Association, Inc.

  1. Identifying Minefields and Verifying Clearance: Adapting Statistical Methods for UXO Target Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.; O'Brien, Robert F.; Wilson, John E.

    2003-09-01

    It may not be feasible to completely survey large tracts of land suspected of containing minefields. It is desirable to develop a characterization protocol that will confidently identify minefields within these large land tracts if they exist. Naturally, surveying areas of greatest concern and most likely locations would be necessary but will not provide the needed confidence that an unknown minefield had not eluded detection. Once minefields are detected, methods are needed to bound the area that will require detailed mine detection surveys. The US Department of Defense Strategic Environmental Research and Development Program (SERDP) is sponsoring the development of statistical survey methods and tools for detecting potential UXO targets. These methods may be directly applicable to demining efforts. Statistical methods are employed to determine the optimal geophysical survey transect spacing to have confidence of detecting target areas of a critical size, shape, and anomaly density. Other methods under development determine the proportion of a land area that must be surveyed to confidently conclude that there are no UXO present. Adaptive sampling schemes are also being developed as an approach for bounding the target areas. These methods and tools will be presented and the status of relevant research in this area will be discussed.
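A simplified geometric core of such transect-spacing calculations: a circular target of radius r, centered at random relative to parallel transects of spacing s, is crossed by at least one transect with probability 2r/s (capped at 1). Real tools also fold in anomaly density and sensor detection probability along the transect; this sketch covers only the geometry:

```python
def transect_detection_probability(target_radius, transect_spacing):
    """Probability that a circular target area intersects at least one of a
    set of parallel, zero-width survey transects with the given spacing,
    for a target centered uniformly at random: min(1, 2r / s)."""
    return min(1.0, 2.0 * target_radius / transect_spacing)

def spacing_for_confidence(target_radius, confidence):
    """Largest transect spacing that still crosses such a target with the
    required probability; inverting the relation gives s = 2r / confidence."""
    return 2.0 * target_radius / confidence

p = transect_detection_probability(25.0, 100.0)   # r = 25 m, s = 100 m
s = spacing_for_confidence(25.0, 0.95)            # spacing for 95% confidence
```

Once spacing is at most the target diameter, the geometric crossing probability saturates at 1, and anomaly density becomes the limiting factor.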

  2. Kinetic model for microbial growth and desulphurisation with Enterobacter sp.

    PubMed

    Liu, Long; Guo, Zhiguo; Lu, Jianjiang; Xu, Xiaolin

    2015-02-01

    Biodesulphurisation was investigated using Enterobacter sp. D4, which can selectively desulphurise dibenzothiophene and convert it into 2-hydroxybiphenyl (2-HBP). The experimental values of growth, substrate consumption, and product generation were fitted at the 95% confidence level using three models: the Hinshelwood equation and Luedeking-Piret and Luedeking-Piret-like equations. The average error between experimental and fitted values was less than 10%. These kinetic models describe all the experimental data with good statistical parameters. The production of 2-HBP in Enterobacter sp. occurred by "coupled growth".

  3. SPIPS: Spectro-Photo-Interferometry of Pulsating Stars

    NASA Astrophysics Data System (ADS)

    Mérand, Antoine

    2017-10-01

    SPIPS (Spectro-Photo-Interferometry of Pulsating Stars) combines radial velocimetry, interferometry, and photometry to estimate physical parameters of pulsating stars, including the presence of infrared excess, color excess, Teff, and the distance/p-factor ratio. The global model-based parallax-of-pulsation method is implemented in Python. Derived parameters have a high level of confidence: statistical precision is improved (compared to other methods) by the large number of data taken into account, accuracy is improved by consistent physical modeling, and the reliability of the derived parameters is strengthened by redundancy in the data.

  4. [Gas chromatographic isolation of chloropicrin in drinking water].

    PubMed

    Malysheva, A G; Sotnikov, E E; Moskovkin, A S; Kamenetskaia, D B

    2004-01-01

    A gas chromatographic method has been developed to identify chloropicrin in drinking water, based on its separation from water by static gas extraction and analysis of the equilibrium vapor phase on a capillary column with electron-capture and nitrogen-phosphorus detectors connected in series. The method allows chloropicrin to be detected at the level of 5 mg/dm³ with a total measurement error of ±10% at a confidence probability of 0.95. The paper shows that the sensitivity of identification can be significantly increased.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigeti, David E.; Pelak, Robert A.

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing.
All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
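The binomial model with a beta prior described above has a closed-form posterior, Beta(a + s, b + n − s), where s counts the experiments in which the new code won; the confidence that the new code is genuinely better is then P(θ > 1/2) under that posterior. A sketch using simple numerical integration (the win counts are invented):

```python
import math

def posterior_prob_improvement(successes, trials, a_prior=1.0, b_prior=1.0):
    """Posterior P(theta > 1/2) under a Beta(a, b) prior and binomial data,
    where `successes` counts experiments in which the new code beat the old.
    Computed by midpoint-rule integration of the Beta(a + s, b + n - s)
    posterior density over [1/2, 1]."""
    a = a_prior + successes
    b = b_prior + trials - successes
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    m = 20000
    total = 0.0
    for i in range(m):
        t = 0.5 + (i + 0.5) * 0.5 / m
        total += math.exp(log_norm + (a - 1) * math.log(t) + (b - 1) * math.log(1 - t))
    return total * 0.5 / m

# 14 wins in 16 head-to-head comparisons: strong evidence that theta > 1/2.
p_better = posterior_prob_improvement(14, 16)
```

With a uniform prior and an even 8-of-16 split, the posterior is symmetric about 1/2 and the confidence collapses to 0.5, matching the paper's point that samples near θ = 1/2 are uninformative.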

  6. Ethnic disparities in the risk of colorectal adenomas associated with lipid levels: a retrospective multiethnic study.

    PubMed

    Davis-Yadley, Ashley H; Lipka, Seth; Shen, Huafeng; Devanney, Valerie; Swarup, Supreeya; Barnowsky, Alex; Silpe, Jeff; Mosdale, Josh; Pan, Qinshi; Fridlyand, Svetlana; Sreeharshan, Suhas; Abraham, Albin; Viswanathan, Prakash; Krishnamachari, Bhuma

    2015-03-01

    Although data exist showing that uncontrolled lipid levels are associated with colorectal adenomas in white and black patients, there are currently no studies looking only at the Hispanic population. With the rapid increase in the Hispanic population, we aimed to examine their risk of colorectal adenomas in association with lipid levels. We retrospectively analyzed 1473 patients undergoing colonoscopy from 2009 to 2011 at a community hospital. Statistical analysis was performed using the chi-squared test for categorical variables and the t test for continuous variables, with age-, gender-, and race-adjusted odds ratios. An unconditional logistic regression model was used to estimate 95% confidence intervals (CI). SAS 9.3 software was used to perform all statistical analysis. In our general population, there was an association between elevated triglyceride levels greater than 150 and the presence of multiple colorectal adenomas, with odds ratio (OR) 1.60 (1.03, 2.48). There was an association between proximal colon adenomas and cholesterol levels of 200 to 239, with OR 1.57 (1.07, 2.30), and LDL (low-density lipoprotein) levels greater than 130, with OR 1.54 (1.04, 2.30). There was no association between high-density lipoprotein (HDL) levels and colorectal adenomas. The Hispanic population showed no statistical correlation between elevated triglycerides, cholesterol, or LDL and the presence, size, location, or multiplicity of colorectal adenomas. We found a significant correlation between elevated lipid levels and colorectal adenomas in white and black patients; however, there was no such association in the Hispanic population. This finding may be due to environmental factors such as diet, colonic flora, or genetic susceptibility, which warrants further investigation and research.
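Odds ratios with 95% confidence intervals of the kind reported above are conventionally obtained by exponentiating a logistic-regression coefficient and its Wald interval. A sketch with a hypothetical coefficient and standard error (chosen to land near an OR of 1.60, not taken from the study):

```python
import math
from statistics import NormalDist

def odds_ratio_ci(log_or, se, cl=0.95):
    """Wald confidence interval for an odds ratio, given the fitted
    log-odds-ratio and its standard error from a logistic regression."""
    z = NormalDist().inv_cdf(0.5 + cl / 2)  # 1.96 for a 95% interval
    return (math.exp(log_or - z * se), math.exp(log_or + z * se))

# Hypothetical fit: log(OR) = 0.47 with SE = 0.22.
lo, hi = odds_ratio_ci(0.47, 0.22)
```

An interval whose lower bound exceeds 1 is what licenses the "significant association" language in abstracts like this one.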

  7. An evaluation of G-protein coupled membrane estrogen receptor-1 level in stuttering.

    PubMed

    Bilal, Nagihan; Kurutas, Ergül Belge; Orhan, Israfil

    2018-02-01

    Stuttering is a widespread but little understood disorder. There has been a recent increase in neuropathophysiological, genetic, and biochemical studies related to its etiopathogenesis. As developmental stuttering persists into adulthood in males, hormonal factors are thought to have an effect. In this study, an evaluation was made for the first time of the serum GPER-1 level in patients who stutter. Prospective case control. The study included 30 patients with a stutter, aged < 18 years, and 35 age-matched children as the control group. The Stuttering Severity Instrument-3 form was administered to the patients. Serum GPER-1, TSH, estradiol, prolactin, progesterone, and testosterone levels were evaluated. The GPER-1 level was 0.51 (0.42-0.67) ng/mL in the patients and 0.19 (0.13-0.25) ng/mL in the control group, a statistically significant difference (p < 0.001). A statistically significant difference was also found between genders, with a GPER-1 level of 0.56 (0.44-0.68) ng/mL in the male stuttering patients and 0.44 (0.35-0.49) ng/mL in the female patients (p = 0.026). ROC analysis of the serum GPER-1 levels for differential diagnosis was statistically significant [area under the ROC curve (AUC) 0.998, confidence interval (CI) 0.992-1.000, p < 0.001]. The GPER-1 levels of the stuttering patients were higher than those of the control group, and the GPER-1 levels of male patients were higher than those of females. As GPER-1 has high sensitivity and specificity, it could be considered important in the diagnosis and treatment of stuttering.

  8. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  9. Statistical procedures for determination and verification of minimum reporting levels for drinking water methods.

    PubMed

    Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A

    2006-01-01

    The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
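The MRL verification step can be illustrated with a small sketch: build a ~99% prediction interval from seven replicates and check that it sits inside the 50%-150% recovery limits. This is a simplified rendition of the idea, not the EPA's exact procedure, and the replicate data are invented:

```python
import math
import statistics

def mrl_verified(measured, true_conc, t_crit=3.707):
    """Sketch of MRL verification: from n replicate measurements at the MRL,
    build a two-sided ~99% prediction interval for a future recovery and
    check it lies inside the 50%-150% data-quality limits. t_crit is the
    two-sided 99% Student-t value for n - 1 = 6 degrees of freedom,
    hard-coded because the standard library has no t quantile function."""
    n = len(measured)
    mean = statistics.mean(measured)
    sd = statistics.stdev(measured)
    half_width = t_crit * sd * math.sqrt(1 + 1 / n)
    lower = (mean - half_width) / true_conc   # recovery fraction, lower bound
    upper = (mean + half_width) / true_conc   # recovery fraction, upper bound
    return 0.5 <= lower and upper <= 1.5

# Seven replicates at a nominal 1.0 ug/L spike (hypothetical data):
ok = mrl_verified([0.96, 1.02, 0.99, 1.05, 0.97, 1.01, 1.00], 1.0)
```

Tight replicates pass; noisy replicates at the same mean fail, which is exactly the precision-plus-accuracy behaviour the LCMRL concept is designed to capture.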

  10. Confidence crisis of results in biomechanics research.

    PubMed

    Knudson, Duane

    2017-11-01

    Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

  11. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.

    PubMed

    De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C

    2017-06-21

    How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.
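The Bayesian integration scheme described here is, at its core, precision weighting: each source of information is weighted by the inverse of its variance, so more reliable social input moves the judgment more and always shrinks the posterior uncertainty. A minimal Gaussian sketch with invented numbers:

```python
def integrate_social_info(prior_mean, prior_var, social_mean, social_var):
    """Precision-weighted (Bayesian, Gaussian) combination of an initial
    value judgment with social information: lower-variance (more reliable)
    sources get more weight, and the posterior variance always shrinks."""
    w_prior = 1.0 / prior_var
    w_social = 1.0 / social_var
    post_mean = (w_prior * prior_mean + w_social * social_mean) / (w_prior + w_social)
    post_var = 1.0 / (w_prior + w_social)
    return post_mean, post_var

# An uncertain initial rating of 6/10 meets consistent reviews near 8/10:
mean_reliable, var_reliable = integrate_social_info(6.0, 2.0, 8.0, 0.5)
# The same reviews, but highly inconsistent, move the judgment much less:
mean_noisy, _ = integrate_social_info(6.0, 2.0, 8.0, 8.0)
```

This is the same combination rule used for multisensory cue integration in perception, which is the analogy the abstract draws.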

  12. Solid recovered fuel: influence of waste stream composition and processing on chlorine content and fuel quality.

    PubMed

    Velis, Costas; Wagland, Stuart; Longhurst, Phil; Robson, Bryce; Sinfield, Keith; Wise, Stephen; Pollard, Simon

    2012-02-07

    Solid recovered fuel (SRF) produced by mechanical-biological treatment (MBT) of municipal waste can replace fossil fuels, being a CO₂-neutral, affordable, alternative energy source. SRF application is limited by low confidence in its quality. We present results for key SRF properties centered on the issue of chlorine content. A detailed investigation involved sampling, statistical analysis, reconstruction of composition, and modeling of SRF properties. The total chlorine median for a typical plant during summer operation was 0.69% w/w (dry), with lower/upper 95% confidence intervals of 0.60% w/w (dry) and 0.74% w/w (dry) (class 3 of the CEN Cl indicator). The average total chlorine can be simulated using a reconciled SRF composition before shredding to <40 mm. The relative plastics vs paper mass ratios in particular result in an SRF with a 95% upper confidence limit for ash content marginally below the 20% w/w (dry) deemed suitable for certain power plants, and a lower 95% confidence limit of net calorific value (NCV) at 14.5 MJ kg⁻¹ (as received). The data provide, for the first time, a high level of confidence on the effects of SRF composition on its chlorine content, illustrating interrelationships with other fuel properties. The findings presented here allow rational debate on achievable vs desirable MBT-derived SRF quality, informing the development of realistic SRF quality specifications, through modeling exercises, needed for effective thermal recovery.

  13. Women's birthplace decision-making, the role of confidence: Part of the Evaluating Maternity Units study, New Zealand.

    PubMed

    Grigg, Celia P; Tracy, Sally K; Schmied, Virginia; Daellenbach, Rea; Kensington, Mary

    2015-06-01

    to explore women׳s birthplace decision-making and identify the factors which enable women to plan to give birth in a freestanding midwifery-led primary level maternity unit rather than in an obstetric-led tertiary level maternity hospital in New Zealand. a mixed methods prospective cohort design. data from eight focus groups (37 women) and a six week postpartum survey (571 women, 82%) were analysed using thematic analysis and descriptive statistics. The qualitative data from the focus groups and survey were the primary data sources and were integrated at the analysis stage; and the secondary qualitative and quantitative data were integrated at the interpretation stage. Christchurch, New Zealand, with one tertiary maternity hospital and four primary level maternity units (2010-2012). well (at 'low risk' of developing complications), pregnant women booked to give birth in one of the primary units or the tertiary hospital. All women received midwifery continuity of care, regardless of their intended or actual birthplace. five core themes were identified: the birth process, women׳s self-belief in their ability to give birth, midwives, the health system and birth place. 'Confidence' was identified as the overarching concept influencing the themes. Women who chose to give birth in a primary maternity unit appeared to differ markedly in their beliefs regarding their optimal birthplace compared to women who chose to give birth in a tertiary maternity hospital. The women who planned a primary maternity unit birth expressed confidence in the birth process, their ability to give birth, their midwife, the maternity system and/or the primary unit itself. The women planning to give birth in a tertiary hospital did not express confidence in the birth process, their ability to give birth, the system for transfers and/or the primary unit as a birthplace, although they did express confidence in their midwife. 
Birthplace is a profoundly important aspect of women's experience of childbirth. Birthplace decision-making is complex, in common with many other aspects of childbirth. A multiplicity of factors needs to converge in order for all those involved to gain the confidence required to plan what, in this context, might be considered a 'countercultural' decision to give birth at a midwife-led primary maternity unit. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Eruption patterns of the Chilean volcanoes Villarrica, Llaima, and Tupungatito

    NASA Astrophysics Data System (ADS)

    Muñoz, Miguel

    1983-09-01

    The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none have been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertain the probability P with which the random Poisson model fits the data, indicate that there is significant agreement in the case of Villarrica (P = 0.59) and Tupungatito (P = 0.3). Even though the P-value for Llaima is a marginally significant 0.1 (which is equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features are possibly present in the eruptive activity of this volcano.
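
    The checks described above can be sketched for an arbitrary eruption series: under a Poisson process the inter-eruption intervals are exponential with coefficient of variation near 1, so a Kolmogorov-Smirnov test against a fitted exponential and the sample CV give quick diagnostics. A minimal sketch with made-up eruption years (not the paper's data); note that estimating the rate from the same data makes the tabulated KS p-value only approximate.

```python
import numpy as np
from scipy import stats

# Hypothetical eruption years -- illustrative only, NOT the paper's data.
years = np.array([1906, 1908, 1915, 1920, 1927, 1938, 1948, 1949,
                  1956, 1963, 1964, 1971, 1980, 1984, 1991])
gaps = np.diff(years)  # inter-eruption intervals in years

# A Poisson process has exponential inter-event times; KS test against
# an exponential with the sample mean as its scale parameter.
d_stat, p_value = stats.kstest(gaps, "expon", args=(0, gaps.mean()))

# Coefficient of variation of the gaps: close to 1 under a Poisson process.
cv = gaps.std(ddof=1) / gaps.mean()
```

    A large p-value here fails to reject the Poisson model, mirroring the paper's conclusion for Villarrica and Tupungatito.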

  15. Validation of Scores from a New Measure of Preservice Teachers' Self-Efficacy to Teach Statistics in the Middle Grades

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Sorto, M. Alejandra; Pierce, Rebecca L.; Lesser, Lawrence M.; Murphy, Teri J.

    2014-01-01

    The influential "Common Core State Standards for Mathematics" (CCSSM) expect students to start statistics learning during middle grades. Thus teacher education and professional development programs are advised to help preservice and in-service teachers increase their knowledge and confidence to teach statistics. Although existing…

  16. Prognostic value of fasting versus nonfasting low-density lipoprotein cholesterol levels on long-term mortality: insight from the National Health and Nutrition Examination Survey III (NHANES-III).

    PubMed

    Doran, Bethany; Guo, Yu; Xu, Jinfeng; Weintraub, Howard; Mora, Samia; Maron, David J; Bangalore, Sripal

    2014-08-12

    National and international guidelines recommend fasting lipid panel measurement for risk stratification of patients for prevention of cardiovascular events. However, the prognostic value of fasting versus nonfasting low-density lipoprotein cholesterol (LDL-C) is uncertain. Patients enrolled in the National Health and Nutrition Examination Survey III (NHANES-III), a nationally representative cross-sectional survey performed from 1988 to 1994, were stratified on the basis of fasting status (≥8 or <8 hours) and followed for a mean of 14.0 (±0.22) years. Propensity score matching was used to assemble fasting and nonfasting cohorts with similar baseline characteristics. The risk of outcomes as a function of LDL-C and fasting status was assessed with the use of receiver operating characteristic curves and bootstrapping methods. The interaction between fasting status and LDL-C was assessed with Cox proportional hazards modeling. Primary outcome was all-cause mortality. Secondary outcome was cardiovascular mortality. One-to-one matching based on propensity score yielded 4299 pairs of fasting and nonfasting individuals. For the primary outcome, fasting LDL-C yielded prognostic value similar to that for nonfasting LDL-C (C statistic=0.59 [95% confidence interval, 0.57-0.61] versus 0.58 [95% confidence interval, 0.56-0.60]; P=0.73), and LDL-C by fasting status interaction term in the Cox proportional hazards model was not significant (Pinteraction=0.11). Similar results were seen for the secondary outcome (fasting versus nonfasting C statistic=0.62 [95% confidence interval, 0.60-0.66] versus 0.62 [95% confidence interval, 0.60-0.66]; P=0.96; Pinteraction=0.34). Nonfasting LDL-C has prognostic value similar to that of fasting LDL-C. National and international agencies should consider reevaluating the recommendation that patients fast before obtaining a lipid panel. © 2014 American Heart Association, Inc.
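
    The C-statistic comparison with bootstrapping described above can be sketched on synthetic data (the cohort size, LDL-C distributions, and event rate below are invented for illustration; the C statistic is computed through the Mann-Whitney rank identity, and a simple percentile bootstrap gives an interval for the difference):

```python
import numpy as np

rng = np.random.default_rng(1)

def c_statistic(scores, events):
    # C statistic (AUC) via the Mann-Whitney identity:
    # P(score_case > score_control) + 0.5 * P(tie)
    pos, neg = scores[events == 1], scores[events == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

# Synthetic stand-in for the propensity-matched fasting / nonfasting cohorts.
n = 500
died = rng.binomial(1, 0.2, n)
ldl_fasting = rng.normal(130 + 8 * died, 30)
ldl_nonfasting = rng.normal(128 + 7 * died, 32)

diff = c_statistic(ldl_fasting, died) - c_statistic(ldl_nonfasting, died)

# Percentile bootstrap for the difference in C statistics.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(c_statistic(ldl_fasting[idx], died[idx])
                - c_statistic(ldl_nonfasting[idx], died[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

    A bootstrap interval for the difference that straddles zero corresponds to the paper's finding of similar prognostic value for fasting and nonfasting LDL-C.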

  17. Peer-driven contraceptive choices and preferences for contraceptive methods among students of tertiary educational institutions in Enugu, Nigeria.

    PubMed

    Iyoke, CA; Ezugwu, FO; Lawani, OL; Ugwu, GO; Ajah, LO; Mba, SG

    2014-01-01

    To describe the methods preferred for contraception, evaluate preferences and adherence to modern contraceptive methods, and determine the factors associated with contraceptive choices among tertiary students in South East Nigeria. A questionnaire-based cross-sectional study of sexual habits, knowledge of contraceptive methods, and patterns of contraceptive choices among a pooled sample of unmarried students from the three largest tertiary educational institutions in Enugu city, Nigeria, was conducted. Statistical analysis involved descriptive and inferential statistics at the 95% level of confidence. A total of 313 unmarried students were studied (194 males; 119 females). Their mean age was 22.5±5.1 years. Over 98% of males and 85% of females made their contraceptive choices based on information from peers. Preferences for contraceptive methods among female students were 49.2% for traditional methods of contraception, 28% for modern methods, 10% for nonpharmacological agents, and 8% for off-label drugs. Adherence to modern contraceptives among female students was 35%. Among male students, the preference for the male condom was 45.2% and the adherence to condom use was 21.7%. Multivariate analysis showed that receiving information from health personnel/media/workshops (odds ratio 9.54, 95% confidence interval 3.5-26.3), a health science-related course of study (odds ratio 3.5, 95% confidence interval 1.3-9.6), and previous sexual exposure prior to university admission (odds ratio 3.48, 95% confidence interval 1.5-8.0) all increased the likelihood of adherence to modern contraceptive methods. An overwhelming reliance on peers for contraceptive information in the context of poor knowledge of modern methods of contraception among young people could have contributed to the low preferences and adherence to modern contraceptive methods among students in tertiary educational institutions. 
Programs to reduce risky sexual behavior among these students may need to focus on increasing the content and adequacy of contraceptive information held by people through regular health worker-led, on-campus workshops.

  18. Searching for the 3.5 keV Line in the Stacked Suzaku Observations of Galaxy Clusters

    NASA Technical Reports Server (NTRS)

    Bulbul, Esra; Markevitch, Maxim; Foster, Adam; Miller, Eric; Bautz, Mark; Lowenstein, Mike; Randall, Scott W.; Smith, Randall K.

    2016-01-01

    We perform a detailed study of the stacked Suzaku observations of 47 galaxy clusters, spanning a redshift range of 0.01-0.45, to search for the unidentified 3.5 keV line. This sample provides an independent test for the previously detected line. We detect a 2σ-significant spectral feature at 3.5 keV in the spectrum of the full sample. When the sample is divided into two subsamples (cool-core and non-cool-core clusters), the cool-core subsample shows no statistically significant positive residuals at the line energy. A very weak (≈2σ confidence) spectral feature at 3.5 keV is permitted by the data from the non-cool-core clusters sample. The upper limit on a neutrino decay mixing angle of sin²(2θ) = 6.1 × 10⁻¹¹ from the full Suzaku sample is consistent with the previous detections in the stacked XMM-Newton sample of galaxy clusters (which had a higher statistical sensitivity to faint lines), M31, and the Galactic center, at a 90% confidence level. However, the constraint from the present sample, which does not include the Perseus cluster, is in tension with the line flux previously reported in the core of the Perseus cluster with XMM-Newton and Suzaku.

  19. The effectiveness of nurses' ability to interpret basic electrocardiogram strips accurately using different learning modalities.

    PubMed

    Spiva, LeeAnna; Johnson, Kimberly; Robertson, Bethany; Barrett, Darcy T; Jarrell, Nicole M; Hunter, Donna; Mendoza, Inocencia

    2012-02-01

    Historically, the instructional method of choice has been traditional lecture or face-to-face education; however, changes in the health care environment, including resource constraints, have necessitated examination of this practice. A descriptive pre-/posttest method was used to determine the effectiveness of alternative teaching modalities on nurses' knowledge and confidence in electrocardiogram (EKG) interpretation. A convenience sample of 135 nurses was recruited in an integrated health care system in the Southeastern United States. Nurses attended an instructor-led course, an online learning (e-learning) platform with no study time or 1 week of study time, or an e-learning platform coupled with a 2-hour post-course instructor-facilitated debriefing with no study time or 1 week of study time. Instruments included a confidence scale, an online EKG test, and a course evaluation. Statistically significant differences in knowledge and confidence were found for individual groups after nurses participated in the intervention. Statistically significant differences were found in pre-knowledge and post-confidence when groups were compared. Organizations that use various instructional methods to educate nurses in EKG interpretation can use different teaching modalities without negatively affecting nurses' knowledge or confidence in this skill. Copyright 2012, SLACK Incorporated.

  20. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level α = 0.05. Additionally, this numerical method is used to define new significance levels α* = α + ε, where ε is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified α (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using α* instead of α) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd.
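
    The size calculation described above can be sketched by brute force for small groups: enumerate every outcome, mark those rejected by FET at level α, and maximize the rejection probability over a grid of common success probabilities. This is a simplified, unoptimized stand-in for the authors' numerical algorithm, practical only for small n:

```python
import numpy as np
from scipy.stats import fisher_exact, binom

def fet_size(n, alpha=0.05, grid=None):
    """Actual size of the two-sided Fisher exact test with n subjects per group."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    # Mark every outcome (x successes vs y successes) rejected at level alpha.
    reject = np.zeros((n + 1, n + 1), dtype=bool)
    for x in range(n + 1):
        for y in range(n + 1):
            _, p = fisher_exact([[x, n - x], [y, n - y]])
            reject[x, y] = p <= alpha
    # Size = sup over the common success probability of P(reject | p1 = p2 = p).
    k = np.arange(n + 1)
    return max(binom.pmf(k, n, p) @ reject @ binom.pmf(k, n, p) for p in grid)
```

    For, say, n = 10 per group the computed size falls well below the nominal 0.05, which is the conservativeness the paper quantifies.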

  1. Distributional fold change test – a statistical approach for detecting differential expression in microarray experiments

    PubMed Central

    2012-01-01

    Background Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test, and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. At the same time, the fold change by itself provides valuable information, and it is important to find unambiguous ways of using this information in expression data treatment. Results A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three-dimensional feature space composed of average expression level, average difference of gene expression and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating curves and tested on 11 data sets from the Gene Expression Omnibus database with independently verified differentially expressed genes and compared with the t-test and shrinkage t-test. Overall the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its advantage was most prominent in the low range of differentially expressed features, typical for formalin-fixed paraffin-embedded sample sets. 
Conclusions The distributional fold change test is an effective method for finding and ranking differentially expressed probe sets on microarrays. The application of this test is advantageous to data sets using formalin-fixed paraffin-embedded samples or other systems where degradation effects diminish the applicability of correlation-adjusted methods to the whole feature set. PMID:23122055

  2. Instability of self-esteem, self-confidence, self-liking, self-control, self-competence and perfectionism: associations with oral health status and oral health-related behaviours.

    PubMed

    Dumitrescu, A L; Zetu, L; Teslaru, S

    2012-02-01

    Our aim was to explore whether instability of self-esteem, self-confidence, self-liking, self-control, self-competence and perfectionism each has an independent contribution to self-rated oral health and oral health-related behaviours. A cross-sectional study design was used. Data were collected between November 2008 and May 2009. The sample consisted of 205 Romanian adults (mean age: 29.84 years; 65.2% women; 40% married) who were a random population drawn consecutively from the registry files of two private dental practices in the Iasi area. The questionnaire included information about demographic, psychological, self-reported oral health and oral health-related behaviour items. The comparison of participants who never flossed their teeth with those who flossed every day showed statistically significantly lower levels of self-confidence (P < 0.05), self-liking (P = 0.001), self-competence (P < 0.0001), self-control (P < 0.05) and perfectionism scores (P < 0.05). Significantly higher levels of self-competence were scored by persons who used mouthrinses weekly compared with never-users (P = 0.012). Also, patients who visited the dentist mainly when treatment was needed or when in pain had lower levels of self-competence and self-control compared with those who visited the dentist mainly for check-ups or for tooth cleaning and scaling (P < 0.05). Oral health behaviours (toothbrushing and mouthrinse frequencies) were predicted by multiple regression analyses using sociodemographic (age, gender), self-competence and perfectionism variables. Our study showed that instability of self-esteem, self-confidence, self-competence, self-liking, self-control and perfectionism was associated not only with self-rated dental health but also with oral health behaviours. 
Understanding the psychological factors associated with oral hygiene can further the development and improvement in therapeutic strategies to be used in oral health-improving programs, as well as of programs aimed at prevention and education. © 2011 John Wiley & Sons A/S.

  3. Interpreting “statistical hypothesis testing” results in clinical research

    PubMed Central

    Sarmukaddam, Sanjeev B.

    2012-01-01

    The difference between “clinical significance” and “statistical significance” should be kept in mind while interpreting “statistical hypothesis testing” results in clinical research. This fact is already known to many, but it is pointed out here again because the philosophy of “statistical hypothesis testing” is sometimes criticized unnecessarily, mainly owing to a failure to consider this distinction. Randomized controlled trials are similarly criticized wrongly. That a scientific method may not be applicable in some peculiar or particular situation does not mean that the method is useless. Also remember that “statistical hypothesis testing” is not for decision making, and the field of “decision analysis” is very much an integral part of the science of statistics. It is not correct to say that “confidence intervals have nothing to do with confidence” unless one understands the meaning of the word “confidence” as used in the context of a confidence interval. Interpretation of the results of every study should always consider all possible alternative explanations, such as chance, bias, and confounding. Statistical tests in inferential statistics are, in general, designed to answer the question “How likely is it that the difference found in the random sample(s) is due to chance?”, and therefore the limitation of relying only on statistical significance in making clinical decisions should be avoided. PMID:22707861

  4. Statistical properties of relative weight distributions of four salmonid species and their sampling implications

    USGS Publications Warehouse

    Hyatt, M.W.; Hubert, W.A.

    2001-01-01

    We assessed relative weight (Wr) distributions among 291 samples of stock-to-quality-length brook trout Salvelinus fontinalis, brown trout Salmo trutta, rainbow trout Oncorhynchus mykiss, and cutthroat trout O. clarki from lentic and lotic habitats. Statistics describing Wr sample distributions varied slightly among species and habitat types. The average sample was leptokurtic and slightly skewed to the right with a standard deviation of about 10, but the shapes of Wr distributions varied widely among samples. Twenty-two percent of the samples had nonnormal distributions, suggesting the need to evaluate sample distributions before applying statistical tests to determine whether assumptions are met. In general, our findings indicate that samples of about 100 stock-to-quality-length fish are needed to obtain confidence interval widths of four Wr units around the mean. Power analysis revealed that samples of about 50 stock-to-quality-length fish are needed to detect a 2% change in mean Wr at a relatively high level of power (β = 0.01, α = 0.05).
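
    The sample-size guidance above matches the standard normal-approximation formula: for a confidence interval of total width w on a mean with standard deviation σ, n ≈ (2zσ/w)². A quick sketch (the SD of about 10 Wr units is taken from the abstract; the function name is ours):

```python
from math import ceil
from scipy.stats import norm

def n_for_ci_width(sd, width, conf=0.95):
    """Sample size so a conf-level normal CI for the mean has the given total width."""
    z = norm.ppf(1 - (1 - conf) / 2)  # e.g. 1.96 for a 95% interval
    return ceil((2 * z * sd / width) ** 2)

# SD of about 10 Wr units, target CI width of 4 Wr units:
n = n_for_ci_width(10, 4)  # close to the ~100 fish reported above
```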

  5. The Hanford Thyroid Disease Study: an alternative view of the findings.

    PubMed

    Hoffman, F Owen; Ruttenber, A James; Apostoaei, A Iulian; Carroll, Raymond J; Greenland, Sander

    2007-02-01

    The Hanford Thyroid Disease Study (HTDS) is one of the largest and most complex epidemiologic studies of the relation between environmental exposures to ¹³¹I and thyroid disease. The study detected no dose-response relation using a 0.05 level for statistical significance. The results for thyroid cancer appear inconsistent with those from other studies of populations with similar exposures, and either reflect inadequate statistical power, bias, or unique relations between exposure and disease risk. In this paper, we explore these possibilities, and present evidence that the HTDS statistical power was inadequate due to complex uncertainties associated with the mathematical models and assumptions used to reconstruct individual doses. We conclude that, at the very least, the confidence intervals reported by the HTDS for thyroid cancer and other thyroid diseases are too narrow because they fail to reflect key uncertainties in the measurement-error structure. We recommend that the HTDS results be interpreted as inconclusive rather than as evidence for little or no disease risk from Hanford exposures.

  6. Version 2.0 Visual Sample Plan (VSP): UXO Module Code Description and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.; Wilson, John E.; O'Brien, Robert F.

    2003-05-06

    The Pacific Northwest National Laboratory (PNNL) is developing statistical methods for determining the amount of geophysical surveys conducted along transects (swaths) that are needed to achieve specified levels of confidence of finding target areas (TAs) of anomalous readings and possibly unexploded ordnance (UXO) at closed, transferring and transferred (CTT) Department of Defense (DoD) ranges and other sites. The statistical methods developed by PNNL have been coded into the UXO module of the Visual Sample Plan (VSP) software code that is being developed by PNNL with support from the DoD, the U.S. Department of Energy (DOE), and the U.S. Environmental Protection Agency (EPA). (The VSP software and VSP Users Guide (Hassig et al, 2002) may be downloaded from http://dqo.pnl.gov/vsp.) This report describes and documents the statistical methods developed and the calculations and verification testing that have been conducted to verify that VSP's implementation of these methods is correct and accurate.

  7. Can hospital episode statistics support appraisal and revalidation? Randomised study of physician attitudes.

    PubMed

    Croft, Giles P; Williams, John G; Mann, Robin Y; Cohen, David; Phillips, Ceri J

    2007-08-01

    Hospital episode statistics were originally designed to monitor activity and allocate resources in the NHS. Recently their uses have widened to include analysis of individuals' activity, to inform appraisal and revalidation, and monitor performance. This study investigated physician attitudes to the validity and usefulness of these data for such purposes, and the effect of supporting individuals in data interpretation. A randomised study was conducted with consultant physicians in England, Wales and Scotland. The intervention group was supported by a clinician and an information analyst in obtaining and analysing their own data. The control group was unsupported. Attitudes to the data and confidence in their ability to reflect clinical practice were examined before and after the intervention. It was concluded that hospital episode statistics are not presently fit for monitoring the performance of individual physicians. A more comprehensive description of activity is required for these purposes. Improvements in the quality of existing data through clinical engagement at a local level, however, are possible.

  8. TSP Symposium 2012 Proceedings

    DTIC Science & Technology

    2012-11-01

    [Only extraction fragments of this record survive: table-of-contents entries ("Analysis and Statistical Model", "Analysis and Results", "Threats to Validity and Limitations", "Conclusions", "Acknowledgments") and table captions ("Overall Statistics of the Experiment", "Results of Pairwise ANOVA Analysis, Highlighting Statistically Significant Differences", and a table of mean lower and upper confidence intervals for the percentage of defects injected).]

  9. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
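
    The expected-confidence idea described above can be sketched with a toy discrete joint PMF (the class/feature-bin structure and counts below are invented; the paper's CNN-plus-SVM feature pipeline is replaced by a pre-tabulated joint distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint PMF over (target class, discretized feature) -- a stand-in for
# the joint mass function the paper learns from labeled sonar images.
counts = rng.integers(1, 20, size=(3, 8)).astype(float)  # 3 classes x 8 bins
joint = counts / counts.sum()

p_feature = joint.sum(axis=0)   # marginal P(feature bin)
posterior = joint / p_feature   # P(class | feature bin), column-wise

# The confidence of a classification at feature bin f is max_c P(c | f);
# the expected confidence of a future classification, before any new
# image arrives, averages this over the feature marginal.
expected_conf = float(p_feature @ posterior.max(axis=0))
```

    A path planner could then favor viewpoints whose predicted feature distributions raise this expected confidence, as the abstract suggests.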

  10. Students' perception and relationship between confidence and anxiety in teaching and learning mathematics: A case study in Sekolah Kebangsaan Bukit Kuda, Klang

    NASA Astrophysics Data System (ADS)

    Mohd Nordin, Noraimi Azlin; Md Tahir, Herniza; Kamis, Nor Hanimah; Khairul Azmi, Nurul Nisa'

    2013-04-01

    In general, Mathematics is one of the core subjects that students need to learn, whether they are in primary or secondary school. Different students might have different views of and interests in Mathematics, owing to each student's different level of thinking. Students' acceptance of and confidence in learning Mathematics depend on various factors. A program named "Mini Hari Matematik" was conducted in Sekolah Rendah Kebangsaan Bukit Kuda, Klang, exclusively for 49 students of standards four, five and six to identify the students' perceptions and the correlation between their confidence and anxiety in learning Mathematics. The program was intended to expose the students to the importance of Mathematics in life and hence develop their interest in learning Mathematics. We measured the students' perceptions of teaching and learning Mathematics using a statistical approach based on SPSS. The analysis includes means, variances, observations, correlations and so on. Based on the results obtained, it is found that there is a positive correlation between students' confidence and anxiety in learning Mathematics in their daily life. In addition, students are more attracted to Mathematics if the subject is blended with game elements in the teaching and learning process. In conclusion, there are three basic foundations that need to be developed in each student regarding Mathematics: an early understanding of the subject itself, the ability to communicate about the subject, and the ability to apply the subject in decision making and problem solving. This program gives high benefit to the students in preparing them for the science and technology era.

  11. Comparison of Practices, Knowledge, Confidence, and Attitude toward Oral Cancer among Oral Health Professionals between Japan and Australia.

    PubMed

    Haresaku, Satoru; Makino, Michiko; Sugiyama, Seiichi; Naito, Toru; Mariño, Rodrigo Jose

    2018-04-01

    The purpose of this study was to investigate the practices, knowledge, confidence, and attitude toward oral cancer among Japanese oral health professionals (J-OHPs) and to identify Japan-specific problems in oral cancer practices by comparing them between Japan and Australia. A questionnaire survey regarding oral cancer practices among Australian oral health professionals (Au-OHPs) was conducted in Australia in 2014-2015. The questionnaire was translated into Japanese, and a Web-based questionnaire survey was conducted among 131 Japanese dentists (J-Dentists) and 131 dental hygienists (J-DHs) in 2016. To compare the J-OHPs' findings with the Au-OHPs', the data of Australian dentists (Au-Dentists) and Australian dental hygienists (Au-DHs) were extracted from the Australian survey. Those findings were then compared via statistical analysis. Eighty-two J-Dentists, 55 J-DHs, 214 Au-Dentists, and 45 Au-DHs participated in this study. Only 34.1% of J-Dentists and 36.4% of J-DHs performed oral cancer screenings on their patients; J-OHPs were significantly less likely to perform them than Au-OHPs. The levels of knowledge and confidence regarding oral cancer among J-OHPs were significantly lower than among Au-OHPs. About 90% of J-OHPs felt that they needed additional training in oral cancer practices. Less than 40% of J-OHPs performed oral cancer screenings in their patients. The low levels of knowledge and confidence regarding oral cancer among J-OHPs may contribute to their low performance of oral cancer practices. Therefore, further education and training programs for oral cancer practices should be provided to Japanese OHPs for the prevention and early detection of oral cancer.

  12. Validation of a Projection-domain Insertion of Liver Lesions into CT Images

    PubMed Central

    Chen, Baiyu; Ma, Chi; Leng, Shuai; Fidler, Jeff L.; Sheedy, Shannon P.; McCollough, Cynthia H.; Fletcher, Joel G.; Yu, Lifeng

    2016-01-01

    Rationale and Objectives The aim of this study was to validate a projection-domain lesion-insertion method with observer studies. Materials and Methods A total of 51 proven liver lesions were segmented from computed tomography images, forward projected, and inserted into patient projection data. The images containing inserted and real lesions were then reconstructed and examined in consensus by two radiologists. First, 102 lesions (51 original, 51 inserted) were viewed in a randomized, blinded fashion and scored from 1 (absolutely inserted) to 10 (absolutely real). Statistical tests were performed to compare the scores for inserted and real lesions. Subsequently, a two-alternative-forced-choice test was conducted, with lesions viewed in pairs (real vs. inserted) in a blinded fashion. The radiologists selected the inserted lesion and provided a confidence level of 1 (no confidence) to 5 (completely certain). The number of lesion pairs that were incorrectly classified was calculated. Results The scores for inserted and proven lesions had the same median (8) and similar interquartile ranges (inserted, 5.5–8; real, 6.5–8). The mean scores were not significantly different between real and inserted lesions (P value = 0.17). The receiver operating characteristic curve was nearly diagonal, with an area under the curve of 0.58 ± 0.06. For the two-alternative-forced-choice study, the inserted lesions were incorrectly identified in 49% (25 out of 51) of pairs; radiologists were incorrect in 38% (3 out of 8) of pairs even when they felt very confident in identifying the inserted lesion (confidence level ≥4). Conclusions Radiologists could not distinguish between inserted and real lesions, thereby validating the lesion-insertion technique, which may be useful for conducting virtual clinical trials to optimize image quality and radiation dose. PMID:27432267

  13. Modeling and replicating statistical topology and evidence for CMB nonhomogeneity

    PubMed Central

    Agami, Sarit

    2017-01-01

    Under the banner of “big data,” the detection and classification of structure in extremely large, high-dimensional, data sets are two of the central statistical challenges of our times. Among the most intriguing new approaches to this challenge is “TDA,” or “topological data analysis,” one of the primary aims of which is providing nonmetric, but topologically informative, preanalyses of data which make later, more quantitative, analyses feasible. While TDA rests on strong mathematical foundations from topology, in applications, it has faced challenges due to difficulties in handling issues of statistical reliability and robustness, often leading to an inability to make scientific claims with verifiable levels of statistical confidence. We propose a methodology for the parametric representation, estimation, and replication of persistence diagrams, the main diagnostic tool of TDA. The power of the methodology lies in the fact that even if only one persistence diagram is available for analysis—the typical case for big data applications—the replications permit conventional statistical hypothesis testing. The methodology is conceptually simple and computationally practical, and provides a broadly effective statistical framework for persistence diagram TDA analysis. We demonstrate the basic ideas on a toy example, and the power of the parametric approach to TDA modeling in an analysis of cosmic microwave background (CMB) nonhomogeneity. PMID:29078301

  14. Self-Reported Recovery from 2-Week 12-Hour Shift Work Schedules: A 14-Day Follow-Up.

    PubMed

    Merkus, Suzanne L; Holte, Kari Anne; Huysmans, Maaike A; van de Ven, Peter M; van Mechelen, Willem; van der Beek, Allard J

    2015-09-01

    Recovery from fatigue is important in maintaining night workers' health. This study compared the course of self-reported recovery after 2-week 12-hour schedules consisting of either night shifts or swing shifts (i.e., 7 night shifts followed by 7 day shifts) to such schedules consisting of only day work. Sixty-one male offshore employees-20 night workers, 16 swing shift workers, and 25 day workers-rated six questions on fatigue (sleep quality, feeling rested, physical and mental fatigue, and energy levels; scale 1-11) for 14 days after an offshore tour. After the two night-work schedules, differences on the 1st day (main effects) and differences during the follow-up (interaction effects) were compared to day work with generalized estimating equations analysis. After adjustment for confounders, significant main effects were found for sleep quality for night workers (1.41, 95% confidence interval 1.05-1.89) and swing shift workers (1.42, 95% confidence interval 1.03-1.94) when compared to day workers; their interaction terms were not statistically significant. For the remaining fatigue outcomes, no statistically significant main or interaction effects were found. After 2-week 12-hour night and swing shifts, only the course for sleep quality differed from that of day work. Sleep quality was poorer for night and swing shift workers on the 1st day off and remained poorer for the 14-day follow-up. This showed that while working at night had no effect on feeling rested, tiredness, and energy levels, it had a relatively long-lasting effect on sleep quality.

  15. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

    To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_{B(0)}=15.5 over two contiguous regions which cover one-quarter of the sky and include ~ 11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale ~ 100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Omega =1, b=1.5, sigma_8 =1) at the 99% confidence level because this model has insufficient power on scales lambda >30 h^-1 Mpc. An unbiased open universe CDM model (Omega h =0.2) and a biased CDM model with non-zero cosmological constant (Omega h =0.24, lambda_0 =0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales >10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast delta rho/rho = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2 sigma) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.

  16. Confidence bands for measured economically optimal nitrogen rates

    USDA-ARS?s Scientific Manuscript database

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...

  17. Ground-water quality and effects of poultry confined animal feeding operations on shallow ground water, upper Shoal Creek basin, Southwest Missouri, 2000

    USGS Publications Warehouse

    Mugel, Douglas N.

    2002-01-01

    Forty-seven wells and 8 springs were sampled in May, October, and November 2000 in the upper Shoal Creek Basin, southwest Missouri, to determine if nutrient concentrations and fecal bacteria densities are increasing in the shallow aquifer as a result of poultry confined animal feeding operations (CAFOs). Most of the land use in the basin is agricultural, with cattle and hay production dominating; the number of poultry CAFOs has increased in recent years. Poultry waste (litter) is used as a source of nutrients on pasture land as much as several miles away from poultry barns. Most wells in the sample network were classified as "P" wells, which were open only or mostly to the Springfield Plateau aquifer and where poultry litter was applied to a substantial acreage within 0.5 mile of the well both in spring 2000 and in several previous years; and "Ag" wells, which were open only or mostly to the Springfield Plateau aquifer and which had limited or no association with poultry CAFOs. Water-quality data from wells and springs were grouped for statistical purposes as P1, Ag1, and Sp1 (May 2000 samples) and P2, Ag2, and Sp2 (October or November 2000 samples). The results of this study do not indicate that poultry CAFOs are affecting the shallow ground water in the upper Shoal Creek Basin with respect to nutrient concentrations and fecal bacteria densities. Statistical tests do not indicate that P wells sampled in spring 2000 have statistically larger concentrations of nitrite plus nitrate or fecal indicator bacteria densities than Ag wells sampled during the same time, at a 95-percent confidence level. Instead, the Ag wells had statistically larger concentrations of nitrite plus nitrate and fecal coliform bacteria densities than the P wells. The results of this study do not indicate seasonal variations from spring 2000 to fall 2000 in the concentrations of nutrients or fecal indicator bacteria densities from well samples.
Statistical tests do not indicate statistically significant differences at a 95-percent confidence level for nitrite plus nitrate concentrations or fecal indicator bacteria densities between either P wells sampled in spring and fall 2000, or Ag wells sampled in spring and fall 2000. However, analysis of samples from springs shows that fecal streptococcus bacteria densities were statistically smaller in fall 2000 than in spring 2000 at a 95-percent confidence level. Nitrite plus nitrate concentrations in spring 2000 samples ranged from less than the detection level [0.02 mg/L (milligram per liter) as nitrogen] to 18 mg/L as nitrogen. Seven samples from three wells had nitrite plus nitrate concentrations at or larger than the maximum contaminant level (MCL) of 10 mg/L as nitrogen. The median nitrite plus nitrate concentrations were 0.28 mg/L as nitrogen for P1 samples, 4.6 mg/L as nitrogen for Ag1 samples, and 3.9 mg/L as nitrogen for Sp1 samples. Fecal coliform bacteria were detected in 1 of 25 P1 samples and 5 of 15 Ag1 samples. Escherichia coli (E. coli) bacteria were detected in 3 of 24 P1 samples and 1 of 13 Ag1 samples. Fecal streptococcus bacteria were detected in 8 of 25 P1 samples and 6 of 15 Ag1 samples. Bacteria densities in samples from wells ranged from less than 1 to 81 col/100 mL (colonies per 100 milliliters) of fecal coliform, less than 1 to 140 col/100 mL of E. coli, and less than 1 to 130 col/100 mL of fecal streptococcus. Fecal indicator bacteria densities in samples from springs were substantially larger than in samples from wells. In Sp1 samples, bacteria densities ranged from 12 to 3,300 col/100 mL of fecal coliform, 40 to 2,700 col/100 mL of E. coli, and 42 to 3,100 col/100 mL of fecal streptococcus.

  18. ON THE FOURIER AND WAVELET ANALYSIS OF CORONAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auchère, F.; Froment, C.; Bocchialini, K.

    Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence and Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence and Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.

  19. On the Fourier and Wavelet Analysis of Coronal Time Series

    NASA Astrophysics Data System (ADS)

    Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.

    2016-07-01

    Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence & Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence & Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.
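    The confidence levels discussed in these two records follow the standard chi-square convention: the spectral power of a noise process is distributed as the mean background spectrum times chi2_nu/nu. A minimal sketch of that threshold computation, with an illustrative white-noise and toy red-noise background (the functional form and parameters are assumptions, not the authors' models):

```python
import numpy as np
from scipy.stats import chi2

def confidence_level(background_power, dof=2, conf=0.95):
    """Power threshold above which a spectral peak is significant at
    level `conf`, given the mean background spectrum and the number of
    degrees of freedom of the power estimate."""
    return background_power * chi2.ppf(conf, dof) / dof

freqs = np.linspace(0.01, 1.0, 100)           # arbitrary frequency grid
white = np.ones_like(freqs)                   # flat white-noise background
red = 1.0 / (1.0 + (freqs / 0.1) ** 2)        # toy red-noise background
thresh_white = confidence_level(white)        # constant threshold
thresh_red = confidence_level(red)            # frequency-dependent threshold
```

    Using a red-noise rather than white-noise background lowers the threshold at high frequencies, which is exactly why a mismatched noise model can produce false positives.
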

  20. Colony-level assessment of Brucella and Leptospira in the Guadalupe fur seal, Isla Guadalupe, Mexico.

    PubMed

    Ziehl-Quirós, E Carolina; García-Aguilar, María C; Mellink, Eric

    2017-01-24

    The relatively small population size and restricted distribution of the Guadalupe fur seal Arctocephalus townsendi could make it highly vulnerable to infectious diseases. We performed a colony-level assessment in this species of the prevalence and presence of Brucella spp. and Leptospira spp., pathogenic bacteria that have been reported in several pinniped species worldwide. Forty-six serum samples were collected in 2014 from pups at Isla Guadalupe, the only place where the species effectively reproduces. Samples were tested for Brucella using 3 consecutive serological tests, and for Leptospira using the microscopic agglutination test. For each bacterium, a Bayesian approach was used to estimate prevalence to exposure, and an epidemiological model was used to test the null hypothesis that the bacterium was present in the colony. No serum sample tested positive for Brucella, and the statistical analyses concluded that the colony was bacterium-free with a 96.3% confidence level. However, a Brucella surveillance program would be highly recommendable. Twelve samples were positive (titers 1:50) to 1 or more serovars of Leptospira. The prevalence was calculated at 27.1% (95% credible interval: 15.6-40.3%), and the posterior analyses indicated that the colony was not Leptospira-free with a 100% confidence level. Serovars Icterohaemorrhagiae, Canicola, and Bratislava were detected, but only further research can unveil whether they affect the fur seal population.
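    The prevalence figure above is consistent with a standard beta-binomial computation: under a uniform Beta(1, 1) prior, k positives out of n samples give a Beta(1 + k, 1 + n - k) posterior. This sketch uses the abstract's counts (12 Leptospira-positive of 46 pups); the uniform prior is an assumption, and the paper's full model may differ.

```python
from scipy.stats import beta

n, k = 46, 12                                  # samples, Leptospira positives
posterior = beta(1 + k, 1 + n - k)             # Beta(13, 35) posterior

point = posterior.mean()                       # posterior mean prevalence
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)  # 95% credible interval
```
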

  1. Evaluation of VIDAS UP Listeria assay (LPT) for the detection of Listeria in a variety of foods and environmental surfaces: First Action 2013.10.

    PubMed

    Crowley, Erin; Bird, Patrick; Flannery, Jonathan; Benzinger, M Joseph; Fisher, Kiel; Boyle, Megan; Huffman, Travis; Bastin, Ben; Bedinghaus, Paige; Judd, William; Hoang, Thao; Agin, James; Goins, David; Johnson, Ronald L

    2014-01-01

    The VIDAS UP Listeria (LPT) is an automated rapid screening enzyme phage-ligand based assay for the detection of Listeria species in human food products and environmental samples. The VIDAS LPT method was compared in a multi-laboratory collaborative study to AOAC Official Method 993.12 Listeria monocytogenes in Milk and Dairy Products reference method following current AOAC guidelines. A total of 14 laboratories participated, representing government and industry, throughout the United States. One matrix, queso fresco (soft Mexican cheese), was analyzed using two different test portion sizes, 25 and 125 g. Samples representing each test portion size were artificially contaminated with Listeria species at three levels, an uninoculated control level [0 colony-forming units (CFU)/test portion], a low-inoculum level (0.2-2 CFU/test portion), and a high-inoculum level (2-5 CFU/test portion). For this evaluation, 1800 unpaired replicate test portions were analyzed by either the VIDAS LPT or AOAC 993.12. Each inoculation level was analyzed using the Probability of Detection (POD) statistical model. For the low-level inoculated test portions, difference in collaborator POD (dLPOD) values of 0.01, (-0.10, 0.13), with 95% confidence intervals, were obtained for both 25 and 125 g test portions. The range of the confidence intervals for dLPOD values for both the 25 and 125 g test portions contains the point 0.0 indicating no statistically significant difference in the number of positive samples detected between the VIDAS LPT and the AOAC methods. In addition to Oxford agar, VIDAS LPT test portions were confirmed using Agar Listeria Ottavani and Agosti (ALOA), a proprietary chromogenic agar for the identification and differentiation of L. monocytogenes and Listeria species. No differences were observed between the two selective agars. 
The VIDAS LPT method, with the optional ALOA agar confirmation method, was adopted as Official First Action status for the detection of Listeria species in a variety of foods and environmental samples.
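    The POD comparison above hinges on whether the confidence interval for the difference in laboratory POD contains zero. A minimal sketch of that logic, using a simple Wald interval on hypothetical counts; the AOAC dLPOD interval construction is more involved than this normal approximation.

```python
import math

def dlpod_ci(x1, n1, x2, n2, z=1.96):
    """Difference in probability of detection (POD) between a candidate
    and a reference method, with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

# Hypothetical low-level inoculation counts: positives out of test portions
d, (lo, hi) = dlpod_ci(42, 60, 41, 60)
# If the interval (lo, hi) contains 0.0, the two methods are not
# significantly different, which is the criterion applied in the study.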

  2. Complementary Roles for Biomarkers of Biomechanical Strain ST2 and N-Terminal Prohormone B-Type Natriuretic Peptide in Patients With ST-Elevation Myocardial Infarction

    PubMed Central

    Sabatine, Marc S.; Morrow, David A.; Higgins, Luke J.; MacGillivray, Catherine; Guo, Wei; Bode, Christophe; Rifai, Nader; Cannon, Christopher P.; Gerszten, Robert E.; Lee, Richard T.

    2014-01-01

    Background ST2 is a member of the interleukin-1 receptor family with a soluble form that is markedly upregulated on application of biomechanical strain to cardiac myocytes. Circulating ST2 levels are elevated in the setting of acute myocardial infarction, but the predictive value of ST2 independent of traditional clinical factors and of an established biomarker of biomechanical strain, N-terminal prohormone B-type natriuretic peptide (NT-proBNP), has not been established. Methods and Results We measured ST2 at baseline in 1239 patients with ST-elevation myocardial infarction from the CLopidogrel as Adjunctive ReperfusIon TherapY–Thrombolysis in Myocardial Infarction 28 (CLARITY-TIMI 28) trial. Per trial protocol, patients were to undergo coronary angiography after 2 to 8 days and were followed up for 30 days for clinical events. In contrast to NT-proBNP, ST2 levels were independent of clinical factors potentially related to chronic increased left ventricular wall stress, including age, hypertension, prior myocardial infarction, and prior heart failure; levels also were only modestly correlated with NT-proBNP (r=0.14). After adjustment for baseline characteristics and NT-proBNP levels, an ST2 level above the median was associated with a significantly greater risk of cardiovascular death or heart failure (third quartile: adjusted odds ratio, 1.42; 95% confidence interval, 0.68 to 3.57; fourth quartile: adjusted odds ratio, 3.57; 95% confidence interval, 1.87 to 6.81; P<0.0001 for trend). When both ST2 and NT-proBNP were added to a model containing traditional clinical predictors, the c statistic significantly improved from 0.82 (95% confidence interval, 0.77 to 0.87) to 0.86 (95% confidence interval, 0.81 to 0.90) (P=0.017). 
Conclusions In ST-elevation myocardial infarction, high baseline ST2 levels are a significant predictor of cardiovascular death and heart failure independently of baseline characteristics and NT-proBNP, and the combination of ST2 and NT-proBNP significantly improves risk stratification. These data highlight the prognostic value of multiple, complementary biomarkers of biomechanical strain in ST-elevation myocardial infarction. PMID:18378613

  3. Statistical Modeling of Single Target Cell Encapsulation

    PubMed Central

    Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan

    2011-01-01

    High-throughput drop-on-demand systems for separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology that has broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
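    Random encapsulation of this kind is classically described by Poisson statistics: at mean droplet occupancy lam, a droplet holds exactly k cells with probability lam**k * exp(-lam) / k!, so the single-cell yield peaks at lam = 1. This is the textbook model, sketched below, not necessarily the authors' exact formulation.

```python
import math

def p_cells(k, lam):
    """Poisson probability that a droplet encapsulates exactly k cells,
    given mean occupancy lam (cells per droplet)."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

p_single = p_cells(1, 1.0)   # single-cell yield at the optimal loading, ~37%
p_empty = p_cells(0, 0.3)    # dilute loading trades throughput for purity
```
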

  4. Identification of structural damage using wavelet-based data classification

    NASA Astrophysics Data System (ADS)

    Koh, Bong-Hwan; Jeong, Min-Joong; Jung, Uk

    2008-03-01

    Predicted time-history responses from a finite-element (FE) model provide a baseline map where damage locations are clustered and classified by extracted damage-sensitive wavelet coefficients such as vertical energy threshold (VET) positions having large silhouette statistics. Likewise, the measured data from damaged structure are also decomposed and rearranged according to the most dominant positions of wavelet coefficients. Having projected the coefficients to the baseline map, the true localization of damage can be identified by investigating the level of closeness between the measurement and predictions. The statistical confidence of baseline map improves as the number of prediction cases increases. The simulation results of damage detection in a truss structure show that the approach proposed in this study can be successfully applied for locating structural damage even in the presence of a considerable amount of process and measurement noise.

  5. Colorimetric determination of nitrate plus nitrite in water by enzymatic reduction, automated discrete analyzer methods

    USGS Publications Warehouse

    Patton, Charles J.; Kryskalla, Jennifer R.

    2011-01-01

    In addition to operational details and performance benchmarks for these new DA-AtNaR2 nitrate + nitrite assays, this report also provides results of interference studies for common inorganic and organic matrix constituents at 1, 10, and 100 times their median concentrations in surface-water and groundwater samples submitted annually to the NWQL for nitrate + nitrite analyses. Paired t-test and Wilcoxon signed-rank statistical analyses of results determined by CFA-CdR methods and DA-AtNaR2 methods indicate that nitrate concentration differences between population means or sign ranks were either statistically equivalent to zero at the 95 percent confidence level (p ≥ 0.05) or analytically equivalent to zero; that is, when p < 0.05, concentration differences between population means or medians were less than MDLs.
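    The paired-comparison logic above can be sketched with scipy's implementations of both tests; the paired nitrate determinations (mg/L) here are hypothetical, and p ≥ 0.05 corresponds to the report's "statistically equivalent to zero" criterion.

```python
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical paired determinations of the same samples by the two methods
cfa = [0.52, 1.10, 3.4, 0.08, 2.2, 5.1, 0.95, 1.8]   # CFA-CdR results
da = [0.50, 1.12, 3.5, 0.07, 2.2, 5.0, 0.97, 1.8]    # DA-AtNaR2 results

t_stat, t_p = ttest_rel(cfa, da)   # paired t-test on the mean difference
w_stat, w_p = wilcoxon(cfa, da)    # signed-rank test on the median difference
```
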

  6. Reliability of void detection in structural ceramics using scanning laser acoustic microscopy

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Klima, S. J.; Kiser, J. D.; Baaklini, G. Y.

    1985-01-01

    The reliability of scanning laser acoustic microscopy (SLAM) for detecting surface voids in structural ceramic test specimens was statistically evaluated. Specimens of sintered silicon nitride and sintered silicon carbide, seeded with surface voids, were examined by SLAM at an ultrasonic frequency of 100 MHz in the as fired condition and after surface polishing. It was observed that polishing substantially increased void detectability. Voids as small as 100 micrometers in diameter were detected in polished specimens with 0.90 probability at a 0.95 confidence level. In addition, inspection times were reduced up to a factor of 10 after polishing. The applicability of the SLAM technique for detection of naturally occurring flaws of similar dimensions to the seeded voids is discussed. A FORTRAN program listing is given for calculating and plotting flaw detection statistics.
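    Statements of the form "0.90 probability at a 0.95 confidence level" typically rest on a binomial lower confidence bound: if all n seeded flaws of a size class are found, the one-sided lower 95% Clopper-Pearson bound on the detection probability is 0.05**(1/n), which first exceeds 0.90 at n = 29. The sketch below illustrates that calculation; the study's actual detection counts are not reproduced here.

```python
from scipy.stats import beta

def pod_lower_bound(detected, n, conf=0.95):
    """One-sided lower Clopper-Pearson bound on the probability of
    detection, given `detected` successes in `n` trials."""
    if detected == 0:
        return 0.0
    return beta.ppf(1 - conf, detected, n - detected + 1)

bound = pod_lower_bound(29, 29)   # all 29 found: bound ~0.902, i.e.
                                  # 0.90 POD demonstrated at 95% confidence
```
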

  7. Basic biostatistics for post-graduate students

    PubMed Central

    Dakhale, Ganesh N.; Hiware, Sachin K.; Shinde, Abhijit T.; Mahatme, Mohini S.

    2012-01-01

    Statistical methods are important to draw valid conclusions from the obtained data. This article provides background information related to fundamental methods and techniques in biostatistics for the use of postgraduate students. Main focus is given to types of data, measurement of central variations and basic tests, which are useful for analysis of different types of observations. Few parameters like normal distribution, calculation of sample size, level of significance, null hypothesis, indices of variability, and different test are explained in detail by giving suitable examples. Using these guidelines, we are confident enough that postgraduate students will be able to classify distribution of data along with application of proper test. Information is also given regarding various free software programs and websites useful for calculations of statistics. Thus, postgraduate students will be benefitted in both ways whether they opt for academics or for industry. PMID:23087501
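    One of the calculations the article covers, sample size, has a compact closed form for comparing two means: n per group ≈ 2 (z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2. A worked sketch with illustrative values of sigma (common standard deviation) and delta (difference worth detecting):

```python
import math
from scipy.stats import norm

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample
    comparison of means at significance alpha and the given power."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_b = norm.ppf(power)           # quantile for the desired power
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

n = n_per_group(sigma=10, delta=5)   # 63 per group for these inputs
```
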

  8. Total cross section for the γd-->π-pp reaction between 380 and 840 MeV

    NASA Astrophysics Data System (ADS)

    Asai, M.; Endo, I.; Harada, M.; Kasai, S.; Niki, K.; Sumi, Y.; Kato, S.; Maruyama, K.; Murata, Y.; Muto, M.; Yoshida, K.; Iwatani, K.; Hasai, H.; Ito, H.; Maki, T.; Rangacharyulu, C.; Shimizu, H.; Wada, Y.

    1990-09-01

    The total cross section for the γd-->π-pp reaction has been measured for incident photon energies from 380 to 840 MeV in steps of 10 MeV, with the best energy resolution attained so far. A large-acceptance detector was used to observe the reaction products. Overall uncertainties in the deduced cross sections are less than 9% (~4% statistical and ~8% systematic). The results are in excellent agreement with previous bubble chamber measurements and do not show any statistically significant structure which can be interpreted as evidence for the formation of dibaryon resonances. An upper limit at 95% confidence level of σ_peak Γ < 230 μb MeV is obtained for a resonance in the vicinity of photon energy 700 MeV (mass ~2490 MeV).

  9. On intracluster Faraday rotation. II - Statistical analysis

    NASA Technical Reports Server (NTRS)

    Lawler, J. M.; Dennison, B.

    1982-01-01

    The comparison of a reliable sample of radio source Faraday rotation measurements seen through rich clusters of galaxies, with sources seen through the outer parts of clusters and therefore having little intracluster Faraday rotation, indicates that the distribution of rotation in the former population is broadened, but only at the 80% level of statistical confidence. Employing a physical model for the intracluster medium in which the magnetic field strength times the square root of the number of turbulent cells per gas core radius is approximately 0.07 microgauss, a Monte Carlo simulation is able to reproduce the observed broadening. An upper limit of less than 0.20 microgauss on this field strength/turbulent cell ratio, combined with lower limits on field strength imposed by limitations on the Compton-scattered flux, shows that intracluster magnetic fields must be tangled on scales greater than about 20 kpc.

  10. Influence of a School-Based Cooking Course on Students' Food Preferences, Cooking Skills, and Confidence.

    PubMed

    Zahr, Rola; Sibeko, Lindiwe

    2017-03-01

    A quasi-experimental study was conducted to evaluate the influence of Project CHEF, a hands-on cooking and tasting program offered in Vancouver public schools, on students' food preferences, cooking skills, and confidence. Grade 4 and 5 students in an intervention group (n = 68) and a comparison group (n = 32) completed a survey at baseline and 2 to 3 weeks later. Students who participated in Project CHEF reported an increased familiarity and preference for the foods introduced through the program. This was statistically significant (P ≤ 0.05) for broccoli, swiss chard, carrots, and quinoa. A higher percentage of students exposed to Project CHEF reported a statistically significant increase (P ≤ 0.05) in: cutting vegetables and fruit (97% vs 81%), measuring ingredients (67% vs 44%), using a knife (94% vs 82%), and making a balanced meal on their own (69% vs 34%). They also reported a statistically significant increase (P ≤ 0.05) in confidence making the recipes introduced in the program: fruit salad (85% vs 81%), minestrone soup (25% vs 10%), and vegetable tofu stir fry (39% vs 26%). Involving students in hands-on cooking and tasting programs can increase their preferences for unpopular or unfamiliar foods and provide them with the skills and cooking confidence they need to prepare balanced meals.

  11. Significance testing - are we ready yet to abandon its use?

    PubMed

    The, Bertram

    2011-11-01

    Understanding of the damaging effects of significance testing has steadily grown. Reporting p values without dichotomizing the result to be significant or not is not the solution. Confidence intervals are better, but are troubled by a non-intuitive interpretation, and are often misused just to see whether the null value lies within the interval. Bayesian statistics provide an alternative which solves most of these problems. Although criticized for relying on subjective models, the interpretation of a Bayesian posterior probability is more intuitive than the interpretation of a p value, and seems to be closest to intuitive patterns of human decision making. Another alternative could be using confidence interval functions (or p value functions) to display a continuum of intervals at different levels of confidence around a point estimate. Thus, better alternatives to significance testing exist. The reluctance to abandon this practice may reflect both an attachment to old habits and unfamiliarity with better methods. Authors might question whether using less commonly exercised, though superior, techniques will be well received by the editors, reviewers and the readership. A joint effort will be needed to abandon significance testing in clinical research in the future.
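    The confidence interval function mentioned above is just the same point estimate and standard error evaluated at a continuum of confidence levels, yielding a nested family of intervals rather than a single significant/not-significant verdict. A minimal sketch with a hypothetical estimate and standard error:

```python
from scipy.stats import norm

estimate, se = 1.3, 0.4              # hypothetical effect estimate and its SE

def interval(conf):
    """Two-sided normal-theory interval at the given confidence level."""
    z = norm.ppf(0.5 + conf / 2)     # critical value for this level
    return estimate - z * se, estimate + z * se

# Narrow intervals at low confidence widen smoothly as confidence rises.
family = {c: interval(c) for c in (0.50, 0.80, 0.95, 0.99)}
```
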

  12. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H[subscript 0] is not evidence…

  13. How will coastal sea level respond to changes in natural and anthropogenic forcings by 2100?

    NASA Astrophysics Data System (ADS)

    Jevrejeva, S.; Moore, J.; Grinsted, A.

    2010-12-01

    Sea level rise is perhaps the most damaging repercussion of global warming, as 150 million people live less than one meter above current high tides. Using an inverse statistical model we examine potential response in coastal sea level to the changes in natural and anthropogenic forcings by 2100. With six IPCC radiative forcing scenarios we estimate sea level rise of 0.6-1.6 m, with confidence limits of 0.59 m and 1.8 m. Projected impacts of solar and volcanic radiative forcings account only for, at maximum, 5% of total sea level rise, with anthropogenic greenhouse gasses being the dominant forcing. As alternatives to the IPCC projections, even the most intense century of volcanic forcing from the past 1000 years would result in 10-15 cm potential reduction of sea level rise. Stratospheric injections of SO2 equivalent to a Pinatubo eruption every 4 years would effectively just delay sea level rise by 12-20 years.

  14. Plasma Levels of Fatty Acid-Binding Protein 4, Retinol-Binding Protein 4, High-Molecular-Weight Adiponectin, and Cardiovascular Mortality Among Men With Type 2 Diabetes: A 22-Year Prospective Study.

    PubMed

    Liu, Gang; Ding, Ming; Chiuve, Stephanie E; Rimm, Eric B; Franks, Paul W; Meigs, James B; Hu, Frank B; Sun, Qi

    2016-11-01

    To examine select adipokines, including fatty acid-binding protein 4, retinol-binding protein 4, and high-molecular-weight (HMW) adiponectin in relation to cardiovascular disease (CVD) mortality among patients with type 2 diabetes mellitus. Plasma levels of fatty acid-binding protein 4, retinol-binding protein 4, and HMW adiponectin were measured in 950 men with type 2 diabetes mellitus in the Health Professionals Follow-up Study. After an average of 22 years of follow-up (1993-2015), 580 deaths occurred, 220 of them from CVD. After multivariate adjustment for covariates, higher levels of fatty acid-binding protein 4 were significantly associated with higher CVD mortality: comparing extreme tertiles, the hazard ratio (95% confidence interval) of CVD mortality was 1.78 (1.22-2.59; P trend=0.001). A positive association was also observed for HMW adiponectin: the hazard ratio (95% confidence interval) was 2.07 (1.42-3.06; P trend=0.0002), comparing extreme tertiles, whereas higher retinol-binding protein 4 levels were nonsignificantly associated with decreased CVD mortality, with a hazard ratio (95% confidence interval) of 0.73 (0.50-1.07; P trend=0.09). A Mendelian randomization analysis suggested that the causal relationships of HMW adiponectin and retinol-binding protein 4 would be directionally opposite to those observed based on the biomarkers, although none of the Mendelian randomization associations achieved statistical significance. These data suggest that higher levels of fatty acid-binding protein 4 and HMW adiponectin are associated with elevated CVD mortality among men with type 2 diabetes mellitus. Biological mechanisms underlying these observations deserve elucidation, but the associations of HMW adiponectin may partially reflect altered adipose tissue functionality among patients with type 2 diabetes mellitus. © 2016 American Heart Association, Inc.

  15. Serum lipids in hypothyroidism: Our experience.

    PubMed

    Prakash, Archana; Lal, Ashok Kumar

    2006-09-01

    In order to determine whether screening of the lipid profile is justified in patients with hypothyroidism, we estimated serum lipids in cases with different levels of serum TSH. 60 patients with hypothyroidism in the age group of 20 to 60 yrs were studied for thyroid profile over a period of one year. On the basis of serum TSH level the cases were divided into three groups. In the first group the TSH concentration was 8.8±2.99 μIU/ml (95% confidence interval (CI) 8.8±1.07), whereas serum total cholesterol and LDL-chol levels were 196±37.22 and 126±29.17 mg/dl, respectively. Statistical analysis of these groups showed a significant correlation between raised TSH levels and serum total cholesterol and LDL-chol (P<0.05 and P<0.01, respectively). We conclude that hypothyroidism is associated with changes in lipid profile.

  16. Patient confidence regarding secondary lifestyle modification and knowledge of ‘heart attack’ symptoms following percutaneous revascularisation in Japan: a cross-sectional study

    PubMed Central

    Kitakata, Hiroki; Kohno, Takashi; Kohsaka, Shun; Fujino, Junko; Nakano, Naomi; Fukuoka, Ryoma; Yuasa, Shinsuke; Maekawa, Yuichiro; Fukuda, Keiichi

    2018-01-01

    Objective To assess patient perspectives on secondary lifestyle modification and knowledge of ‘heart attack’ after percutaneous coronary intervention (PCI) for coronary artery disease (CAD). Design Observational cross-sectional study. Setting A single university-based hospital centre in Japan. Participants In total, 236 consecutive patients with CAD who underwent PCI completed a questionnaire (age, 67.4±10.1 years; women, 14.8%; elective PCI, 75.4%). The survey questionnaire included questions related to confidence levels about (1) lifestyle modification at the time of discharge and (2) appropriate recognition of heart attack symptoms and reactions to these symptoms on a four-point Likert scale (1=not confident to 4=completely confident). Primary outcome measure The primary outcome assessed was the patients’ confidence level regarding lifestyle modification and the recognition of heart attack symptoms. Results Overall, patients had a high level of confidence (confident or completely confident, >75%) about smoking cessation, alcohol restriction and medication adherence. However, they had a relatively low level of confidence (<50%) about the maintenance of blood pressure control, healthy diet, body weight and routine exercise (≥3 times/week). After adjustment, male sex (OR 3.61, 95% CI 1.11 to 11.8) and lower educational level (OR 3.25; 95% CI 1.70 to 6.23) were identified as factors associated with lower confidence levels. In terms of confidence in the recognition of heart attack, almost all respondents answered ‘yes’ to the item ‘I should go to the hospital as soon as possible when I have a heart attack’; however, only 28% of respondents were confident in their ability to distinguish between heart attack symptoms and other conditions. Conclusions There were substantial disparities in the confidence levels associated with lifestyle modification and recognition/response to heart attack. These gaps need to be studied further and addressed to improve cardiovascular care. PMID:29549203

  17. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    PubMed

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. The use of a physiologically-based extraction test to assess relationships between bioaccessible metals in urban soil and neurodevelopmental conditions in children.

    PubMed

    Hong, Jie; Wang, Yinding; McDermott, Suzanne; Cai, Bo; Aelion, C Marjorie; Lead, Jamie

    2016-05-01

    Intellectual disability (ID) and cerebral palsy (CP) are serious neurodevelopmental conditions, and low birth weight (LBW) is correlated with both ID and CP. The actual causes and mechanisms for each of these child outcomes are not well understood. In this study, the relationships between bioaccessible metal concentrations in urban soil and these child conditions were investigated. A physiologically based extraction test (PBET) mimicking gastric and intestinal processes was applied to measure the bioaccessibility of four metals (cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb)) in urban soil, and a Bayesian Kriging method was used to estimate metal concentrations at geocoded maternal residential sites. The results showed that bioaccessible concentrations of Cd, Ni, and Pb in the intestinal phase were statistically significantly associated with the child outcomes. Lead and nickel were associated with ID, lead and cadmium were associated with LBW, and cadmium was associated with CP. The total concentrations and stomach-phase concentrations showed no significant associations in any of the analyses. For lead, an estimated threshold value was found that was statistically significant in predicting low birth weight. The change point test was statistically significant (p value = 0.045) at an intestine threshold level of 9.2 mg/kg (95% confidence interval 8.9-9.4, p value = 0.0016), which corresponds to 130.6 mg/kg of total Pb concentration in the soil. This is a narrow confidence interval for an important relationship. Published by Elsevier Ltd.

  19. Methylenetetrahydrofolate reductase polymorphisms, serum methylenetetrahydrofolate reductase levels, and risk of childhood acute lymphoblastic leukemia in a Chinese population.

    PubMed

    Tong, Na; Fang, Yongjun; Li, Jie; Wang, Meilin; Lu, Qin; Wang, Shizhi; Tian, Yuanyuan; Rong, Liucheng; Sun, Jielin; Xu, Jianfeng; Zhang, Zhengdong

    2010-03-01

    Methylenetetrahydrofolate reductase (MTHFR), involved in DNA methylation and nucleotide synthesis, is thought to be associated with a decreased risk of adult and childhood acute lymphoblastic leukemia (ALL). Accumulating evidence has indicated that two common genetic variants, C677T and A1298C, are associated with cancer risk. We hypothesized that these two variants are associated with childhood ALL susceptibility and influence serum MTHFR levels. We genotyped these two polymorphisms and measured MTHFR levels in a case-control study of 361 cases and 508 controls. Compared with the 677CC and 677CC/CT genotypes, the 677TT genotype was associated with a statistically significantly decreased risk of childhood ALL (odds ratio = 0.53, 95% confidence interval = 0.32-0.88, and odds ratio = 0.55, 95% confidence interval = 0.35-0.88, respectively). In addition, a pronounced reduction in risk was observed for low-risk ALL and B-phenotype ALL. Moreover, the mean serum MTHFR level was 8.01 ng/mL (+/-4.38) in cases and 9.27 ng/mL (+/-4.80) in controls (P < 0.001). MTHFR levels in subjects with the 677TT genotype were significantly higher than in those with the 677CC genotype (P = 0.010) or 677CT genotype (P = 0.043) in controls. In conclusion, our results provide evidence that the MTHFR polymorphisms might contribute to reduced childhood ALL risk in this population.
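
    For reference, odds ratios with 95% confidence intervals like those quoted in this abstract are derived from a 2×2 genotype-by-case-status table. A minimal sketch using the Woolf (log-odds) method; the counts below are hypothetical, not the study's data:

```python
# How an odds ratio and its 95% CI are computed from a 2x2 table (Woolf method).
# The counts are hypothetical stand-ins, not the study's genotype data.
import math
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, level=0.95):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf formula
    z = NormalDist().inv_cdf(0.5 + level / 2)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(20, 55, 341, 453)  # hypothetical 2x2 counts
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR below 1 with an upper confidence limit below 1, as in the abstract's 0.53 (0.32-0.88), is what licenses the phrase "statistically significantly decreased risk".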

  20. Detection of simulated microcalcifications in fixed mammary tissue: An ROC study of the effect of local versus global histogram equalization.

    PubMed

    Sund, T; Olsen, J B

    2006-09-01

    To investigate whether sliding window adaptive histogram equalization (SWAHE) of digital mammograms improves the detection of simulated calcifications, as compared to images normalized by global histogram equalization (GHE). Direct digital mammograms were obtained from mammary tissue phantoms superimposed with different frames. Each frame was divided into forty squares by a wire mesh, and contained granular calcifications randomly positioned in about 50% of the squares. Three radiologists read the mammograms on a display monitor. They classified their confidence in the presence of microcalcifications in each square on a scale of 1 to 5. Images processed with GHE were first read and used as a reference. In a later session, the same images processed with SWAHE were read. The results were compared using ROC methodology. When the total areas AZ were compared, the results were completely equivocal. When comparing the high-specificity partial ROC area AZ,0.2 below false-positive fraction (FPF) 0.20, two of the three observers performed best with the images processed with SWAHE. The difference was not statistically significant. When the reader's confidence threshold in malignancy is set at a high level, increasing the contrast of mammograms with SWAHE may enhance the visibility of microcalcifications without adversely affecting the false-positive rate. When the reader's confidence threshold is set at a low level, the effect of SWAHE is an increase of false positives. Further investigation is needed to confirm the validity of the conclusions.
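
    The ROC areas compared in this abstract can be computed directly from 5-point confidence ratings by sweeping the rating threshold and applying the trapezoidal rule. A sketch with hypothetical rating counts (not the study's reading data):

```python
# Trapezoidal ROC area from 5-point confidence ratings, the kind of summary
# (total area A_Z) compared in the abstract. Rating counts are hypothetical.
def roc_area(pos_counts, neg_counts):
    """Counts indexed from rating 1 (surely absent) to rating 5 (surely present)."""
    P, N = sum(pos_counts), sum(neg_counts)
    points = [(0.0, 0.0)]  # (FPF, TPF) operating points
    tp = fp = 0
    for i in range(4, -1, -1):  # sweep threshold: ratings 5, 4, 3, 2, 1
        tp += pos_counts[i]
        fp += neg_counts[i]
        points.append((fp / N, tp / P))
    # Trapezoidal rule over consecutive operating points.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

pos = [2, 3, 5, 10, 20]   # squares with calcifications, by rating
neg = [25, 8, 4, 2, 1]    # squares without calcifications, by rating
print(round(roc_area(pos, neg), 3))
```

The partial area below a fixed false-positive fraction (the abstract's A_Z,0.2) would restrict the same sum to the segment with FPF ≤ 0.2.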

  1. Science Achievement and Students' Self-Confidence and Interest in Science: A Taiwanese Representative Sample Study

    ERIC Educational Resources Information Center

    Chang, Chun-Yen; Cheng, Wei-Ying

    2008-01-01

    The interrelationship between senior high school students' science achievement (SA) and their self-confidence and interest in science (SCIS) was explored with a representative sample of approximately 1,044 11th-grade students from 30 classes attending four high schools throughout Taiwan. Statistical analyses indicated that a statistically…

  2. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  3. Statistical analysis of the El Niño-Southern Oscillation and sea-floor seismicity in the eastern tropical Pacific.

    PubMed

    Guillas, Serge; Day, Simon J; McGuire, B

    2010-05-28

    We present statistical evidence for a temporal link between variations in the El Niño-Southern Oscillation (ENSO) and the occurrence of earthquakes on the East Pacific Rise (EPR). We adopt a zero-inflated Poisson regression model to represent the relationship between the number of earthquakes in the Easter microplate on the EPR and ENSO (expressed using the southern oscillation index (SOI) for east Pacific sea-level pressure anomalies) from February 1973 to February 2009. We also examine the relationship between the numbers of earthquakes and sea levels, as retrieved by Topex/Poseidon from October 1992 to July 2002. We observe a significant (95% confidence level) positive influence of SOI on seismicity: positive SOI values trigger more earthquakes over the following 2 to 6 months than negative SOI values. There is a significant negative influence of absolute sea levels on seismicity (at 6 months lag). We propose that increased seismicity is associated with ENSO-driven sea-surface gradients (rising from east to west) in the equatorial Pacific, leading to a reduction in ocean-bottom pressure over the EPR by a few kilopascal. This relationship is opposite to reservoir-triggered seismicity and suggests that EPR fault activity may be triggered by plate flexure associated with the reduced pressure.

  4. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
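
    The bootstrap confidence limits described above follow a generic recipe: refit the model to resampled data many times and read off percentiles of the refitted parameter. A sketch with synthetic log-normal data standing in for the −Dst maxima (the sample size and distribution parameters are hypothetical, not the paper's):

```python
# Percentile-bootstrap confidence limits for a log-normal fit, as a generic
# stand-in for the paper's procedure. Data are synthetic, not -Dst maxima.
import math
import random
from statistics import mean, stdev

random.seed(42)
# Synthetic "storm intensities" drawn from a log-normal (hypothetical mu, sigma).
data = [math.exp(random.gauss(4.5, 0.8)) for _ in range(56)]

def lognormal_fit(sample):
    """Log-normal fit: mean and (sample) sd of the logs."""
    logs = [math.log(x) for x in sample]
    return mean(logs), stdev(logs)

# Percentile bootstrap for the log-scale location parameter mu.
boots = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    boots.append(lognormal_fit(resample)[0])
boots.sort()
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))]
print(f"mu_hat = {lognormal_fit(data)[0]:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The same resample-refit loop applied to an extrapolated exceedance rate, rather than to mu itself, yields confidence limits on forecasts like the "[0.42, 2.41] times per century" quoted above.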

  5. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    PubMed

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence and objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.

  6. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Task shifting to clinical officer-led echocardiography screening for detecting rheumatic heart disease in Malawi, Africa.

    PubMed

    Sims Sanyahumbi, Amy; Sable, Craig A; Karlsten, Melissa; Hosseinipour, Mina C; Kazembe, Peter N; Minard, Charles G; Penny, Daniel J

    2017-08-01

    Echocardiographic screening for rheumatic heart disease in asymptomatic children may result in early diagnosis and prevent progression. Physician-led screening is not feasible in Malawi. Task shifting to mid-level providers such as clinical officers may enable more widespread screening. Hypothesis: With short-course training, clinical officers can accurately screen for rheumatic heart disease using focussed echocardiography. A total of eight clinical officers completed three half-days of didactics and 2 days of hands-on echocardiography training. Clinical officers were evaluated by performing screening echocardiograms on 20 children with known rheumatic heart disease status. They indicated whether children should be referred for follow-up. Referral was indicated if mitral regurgitation measured more than 1.5 cm or there was any measurable aortic regurgitation. The κ statistic was calculated to measure referral agreement with a paediatric cardiologist. Sensitivity and specificity were estimated using a generalised linear mixed model, and were calculated on the basis of World Heart Federation diagnostic criteria. The mean κ statistic comparing clinical officer referrals with the paediatric cardiologist was 0.72 (95% confidence interval: 0.62, 0.82). The κ value ranged from a minimum of 0.57 to a maximum of 0.90. For rheumatic heart disease diagnosis, sensitivity was 0.91 (95% confidence interval: 0.86, 0.95) and specificity was 0.65 (95% confidence interval: 0.57, 0.72). There was substantial agreement between clinical officers and paediatric cardiologists on whether to refer. Clinical officers had a high sensitivity in detecting rheumatic heart disease. With short-course training, clinical officer-led echo screening for rheumatic heart disease is a viable alternative to physician-led screening in resource-limited settings.
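
    The κ statistic used above to quantify referral agreement is computed from a 2×2 table of paired yes/no decisions, correcting observed agreement for agreement expected by chance. A sketch with hypothetical counts (not the study's data):

```python
# Cohen's kappa for rater-vs-reference referral agreement. The 2x2 counts
# below are hypothetical, not the study's screening results.
def cohens_kappa(both_yes, yes_no, no_yes, both_no):
    n = both_yes + yes_no + no_yes + both_no
    p_obs = (both_yes + both_no) / n                              # observed agreement
    p_yes = ((both_yes + yes_no) / n) * ((both_yes + no_yes) / n)  # chance "yes-yes"
    p_no = ((no_yes + both_no) / n) * ((yes_no + both_no) / n)     # chance "no-no"
    p_exp = p_yes + p_no                                          # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# e.g. 20 children: 9 referred by both, 2 by officer only, 1 by cardiologist only.
print(round(cohens_kappa(9, 2, 1, 8), 2))
```

Values above roughly 0.6 are conventionally read as "substantial agreement", which is how the abstract characterizes its mean κ of 0.72.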

  8. Reliability of scanning laser acoustic microscopy for detecting internal voids in structural ceramics

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Baaklini, G. Y.

    1986-01-01

    The reliability of 100 MHz scanning laser acoustic microscopy (SLAM) for detecting internal voids in sintered specimens of silicon nitride and silicon carbide was evaluated. The specimens contained artificially implanted voids positioned at depths of up to 2 mm below the specimen surface. Detection probability of 0.90 at a 0.95 confidence level was determined as a function of material, void diameter, and void depth. The statistical results presented for void detectability indicate some of the strengths and limitations of SLAM as a nondestructive evaluation technique for structural ceramics.

  9. Area estimation using multiyear designs and partial crop identification

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr.

    1984-01-01

    Statistical procedures were developed for large area assessments using both satellite and conventional data. Crop acreages, other ground cover indices, and measures of change were the principal characteristics of interest. These characteristics are capable of being estimated from samples collected possibly from several sources at varying times, with different levels of identification. Multiyear analysis techniques were extended to include partially identified samples; the best current-year sampling design corresponding to a given sampling history was determined; weights reflecting the precision of, or confidence in, each observation were identified and utilized; and the variation in estimates incorporating partially identified samples was quantified.

  10. Solar cosmic ray hazard to interplanetary and earth-orbital space travel

    NASA Technical Reports Server (NTRS)

    Yucker, W. R.

    1972-01-01

    A statistical treatment of the radiation hazards to astronauts due to solar cosmic ray protons is reported to determine shielding requirements for solar proton events. More recent data are incorporated into the present analysis in order to improve the accuracy of the predicted mission fluence and dose. The effects of the finite data sample are discussed. Mission fluence and dose versus shield thickness data are presented for mission lengths up to 3 years during periods of maximum and minimum solar activity; these correspond to various levels of confidence that the predicted hazard will not be exceeded.

  11. Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.

    PubMed

    Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L

    2016-01-01

    The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.

  12. Evaluating image reconstruction methods for tumor detection performance in whole-body PET oncology imaging

    NASA Astrophysics Data System (ADS)

    Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard

    2000-04-01

    This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.

  13. Search for pair-produced resonances each decaying into at least four quarks in proton-proton collisions at √s = 13 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, Albert M; et al.

    This letter presents the results of a search for pair-produced particles of masses above 100 GeV that each decay into at least four quarks. Using data collected by the CMS experiment at the LHC in 2015-2016, corresponding to an integrated luminosity of 38.2 fb⁻¹, reconstructed particles are clustered into two large jets of similar mass, each consistent with four-parton substructure. No statistically significant excess of data over the background prediction is observed in the distribution of average jet mass. Pair-produced squarks with dominant hadronic R-parity-violating decays into four quarks and with masses between 0.10 and 0.72 TeV are excluded at 95% confidence level. Similarly, pair-produced gluinos that decay into five quarks are also excluded with masses between 0.10 and 1.41 TeV at 95% confidence level. These are the first constraints placed on pair-produced particles with masses below 400 GeV that decay into four or five quarks, bridging a significant gap in the coverage of R-parity-violating supersymmetry parameter space.

  14. Net Weight Issue LLNL DOE-STD-3013 Containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilk, P

    2008-01-16

    The following position paper will describe DOE-STD-3013 container sets No.L000072 and No.L000076, and how they are compliant with DOE-STD-3013-2004. All masses of accountable nuclear materials are measured on LLNL certified balances maintained under an MC&A Program approved by DOE/NNSA LSO. All accountability balances are recalibrated annually and checked to be within calibration on each day that the balance is used for accountability purposes. A statistical analysis of the historical calibration checks from the last seven years indicates that the full-range Limit of Error (LoE, 95% confidence level) for the balance used to measure the mass of the contents of the above indicated 3013 containers is 0.185 g. If this error envelope, at the 95% confidence level, were used to generate an upper limit to the measured weight of containers No.L000072 and No.L000076, the error envelope would extend beyond the 5.0 kg 3013-standard limit on the package contents by less than 0.3 g. However, this is still well within the intended safety bounds of DOE-STD-3013-2004.

  15. The intricate Galaxy disk: velocity asymmetries in Gaia-TGAS

    NASA Astrophysics Data System (ADS)

    Antoja, T.; de Bruijne, J.; Figueras, F.; Mor, R.; Prusti, T.; Roca-Fàbrega, S.

    2017-06-01

    We use Gaia-TGAS data to compare the transverse velocities in Galactic longitude (coming from proper motions and parallaxes) in the Milky Way disk for negative and positive longitudes as a function of distance. The transverse velocities are strongly asymmetric and deviate significantly from the expectations for an axisymmetric galaxy. The value and sign of the asymmetry change at spatial scales of several tens of degrees in Galactic longitude and about 0.5 kpc in distance. The asymmetry is statistically significant at the 95% confidence level for 57% of the region probed, which extends up to 1.2 kpc. At this confidence level, 24% of the region shows absolute differences larger than 5 km s-1, and 7% larger than 10 km s-1. The asymmetry pattern shows mild variations in the vertical direction and with stellar type. A first qualitative comparison with spiral arm models indicates that the arms are probably not the main source of the asymmetry. We briefly discuss alternative origins. This is the first time that global all-sky asymmetries have been detected in the Milky Way kinematics beyond the local neighbourhood and with a purely astrometric sample.

  16. Using Context Variety and Students' Discussions in Recognizing Statistical Situations

    ERIC Educational Resources Information Center

    Silva, José Luis Ángel Rodríguez; Aguilar, Mario Sánchez

    2016-01-01

    We present a proposal for helping students to cope with statistical word problems related to the classification of different cases of confidence intervals. The proposal promotes an environment where students can explicitly discuss the reasons underlying their classification of cases.

  17. Impact of confidence number on accuracy of the SureSight Vision Screener.

    PubMed

    2010-02-01

    To assess the relation between the confidence number provided by the Welch Allyn SureSight Vision Screener and screening accuracy, and to determine whether repeated testing to achieve a higher confidence number improves screening accuracy in pre-school children. Lay and nurse screeners screened 1452 children enrolled in the Vision in Preschoolers (VIP) Phase II Study. All children also underwent a comprehensive eye examination. Using statistical comparison of proportions, we examined sensitivity and specificity for detecting any ocular condition targeted for detection in the VIP study, and conditions grouped by severity and by type (amblyopia, strabismus, significant refractive error, and unexplained decreased visual acuity), among children who had confidence numbers ≤4 (retest necessary), 5 (retest if possible), or ≥6 (acceptable). Among the 687 (47.3%) children who had repeated testing by either lay or nurse screeners because of a low confidence number (<6) for one or both eyes in the initial testing, the same analyses were also conducted to compare results between the initial reading and the repeated test reading with the highest confidence number in the same child. These analyses were based on the failure criteria associated with 90% specificity for detecting any VIP condition in VIP Phase II. A lower confidence number category was associated with higher sensitivity (0.71, 0.65, and 0.59 for ≤4, 5, and ≥6, respectively; p = 0.04) but no statistically significant difference in specificity (0.85, 0.85, and 0.91; p = 0.07) for detecting any VIP-targeted condition. Children with any VIP-targeted condition were as likely to be detected using the initial confidence number reading as using the higher confidence number reading from repeated testing. A higher confidence number obtained during screening with the SureSight Vision Screener is not associated with better screening accuracy. 
Repeated testing to reach the manufacturer's recommended minimum value is not helpful in pre-school vision screening.

  18. Approximation of Confidence Limits on Sample Semivariograms From Single Realizations of Spatially Correlated Random Fields

    NASA Astrophysics Data System (ADS)

    Shafer, J. M.; Varljen, M. D.

    1990-08-01

    A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique were verified using a Monte Carlo simulation approach, which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations, with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample semivariograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi² (~30 km²) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation. 
This capability is quite important as it can indicate when a spatially optimized monitoring network would need to be reevaluated and thus lead to more robust monitoring strategies.
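The delete-one jackknife idea for a sample semivariogram can be sketched on a hypothetical 1-D transect. The AR(1) field, the lag tolerance, and the t critical value below are illustrative assumptions; the paper's own procedure and data differ.

```python
import math
import random

def semivariogram(values, coords, lag, tol):
    """Classical (Matheron) semivariogram estimate at one lag distance."""
    num, count = 0.0, 0
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(abs(coords[i] - coords[j]) - lag) <= tol:
                num += (values[i] - values[j]) ** 2
                count += 1
    return num / (2 * count) if count else float("nan")

# Hypothetical 1-D transect of spatially correlated measurements (AR(1)).
random.seed(1)
coords = [float(i) for i in range(40)]
values, prev = [], 0.0
for _ in coords:
    prev = 0.7 * prev + random.gauss(0.0, 1.0)
    values.append(prev)

lag, tol = 1.0, 0.5
gamma_all = semivariogram(values, coords, lag, tol)

# Delete-one jackknife: recompute the estimate with each point removed,
# form pseudovalues, then a t-based confidence interval.
n = len(values)
pseudo = []
for k in range(n):
    g_k = semivariogram(values[:k] + values[k + 1:], coords[:k] + coords[k + 1:], lag, tol)
    pseudo.append(n * gamma_all - (n - 1) * g_k)

jack_mean = sum(pseudo) / n
jack_se = math.sqrt(sum((p - jack_mean) ** 2 for p in pseudo) / (n * (n - 1)))
t975 = 2.023  # approximate two-sided 95% t critical value, 39 df
ci = (jack_mean - t975 * jack_se, jack_mean + t975 * jack_se)
print(f"gamma({lag}) = {gamma_all:.3f}, approx. 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Comparing such intervals across seasons is the kind of check the abstract describes for detecting temporal change in spatial correlation.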

  19. Optimized lower leg injury probability curves from post-mortem human subject tests under axial impacts

    PubMed Central

    Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko

    2015-01-01

    Objective Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and post-test radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age as the covariable. Censoring statuses depended on experimental outcomes. Parameters of the parametric survival model were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal, and log-logistic distributions was selected based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results The mean age, stature, and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m, and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function of the three considered. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-old age groups at 5, 25, and 50% risk levels for lower leg fracture. 
For 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines. PMID:25307381

  20. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution parameters; (2) Data for contour plots of relative likelihood for the two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
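The Fortran program itself is not reproduced here, but its core computation, maximum-likelihood estimation of a two-parameter Weibull with type-I censored (suspended) tests, can be sketched in Python. The simulated fatigue lives and the use of `scipy.optimize.minimize` are illustrative assumptions, not the program's actual method of solution.

```python
import math
import random
from scipy.optimize import minimize

def neg_log_lik(params, times, failed):
    """Negative log-likelihood of a two-parameter Weibull under type-I censoring.

    params holds (log shape, log scale); failed[i] is False for suspended tests.
    """
    beta, eta = math.exp(params[0]), math.exp(params[1])
    ll = 0.0
    for t, f in zip(times, failed):
        z = (t / eta) ** beta
        if f:  # observed failure: log of the Weibull density
            ll += math.log(beta / eta) + (beta - 1) * math.log(t / eta) - z
        else:  # suspension at time t: log of the survival function
            ll += -z
    return -ll

# Hypothetical fatigue lives from Weibull(shape=2, scale=100),
# with tests suspended (type-I censored) at t = 120.
random.seed(7)
t_censor = 120.0
lives = [100.0 * (-math.log(random.random())) ** 0.5 for _ in range(60)]
times = [min(t, t_censor) for t in lives]
failed = [t < t_censor for t in lives]

res = minimize(neg_log_lik, x0=[0.0, math.log(80.0)],
               args=(times, failed), method="Nelder-Mead")
beta_hat, eta_hat = math.exp(res.x[0]), math.exp(res.x[1])
print(f"shape = {beta_hat:.2f}, scale = {eta_hat:.1f}")
```

Working in log-parameters keeps both estimates positive without explicit bound constraints.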

  1. Laharz_py: GIS tools for automated mapping of lahar inundation hazard zones

    USGS Publications Warehouse

    Schilling, Steve P.

    2014-01-01

    Laharz_py is written in the Python programming language as a suite of tools for use in the ArcMap Geographic Information System (GIS). Primarily, Laharz_py is a computational model that uses statistical descriptions of areas inundated by past mass-flow events to forecast areas likely to be inundated by hypothetical future events. The forecasts use physically motivated and statistically calibrated power-law equations, each of the form A = cV^(2/3), relating mass-flow volume (V) to a planimetric or cross-sectional area (A) inundated by an average flow as it descends a given drainage. Calibration of the equations uses logarithmic transformation and linear regression to determine the best-fit values of c. The software uses values of V, an algorithm for identifying mass-flow source locations, and digital elevation models of topography to portray forecast hazard zones for lahars, debris flows, or rock avalanches on maps. Laharz_py offers two methods to construct areas of potential inundation for lahars: (1) selection of a range of plausible V values results in a set of nested hazard zones showing areas likely to be inundated by a range of hypothetical flows; and (2) the user selects a single volume and a confidence interval for the prediction. In either case, Laharz_py calculates the mean expected cross-sectional (A) and planimetric (B) inundation values from each user-selected value of V. For the second case, however, a single value of V yields two additional results representing the upper and lower values of the confidence interval of prediction. Calculation of these two bounding predictions requires the statistically calibrated prediction equations, a user-specified level of confidence, and t-distribution statistics to calculate the standard error of regression, standard error of the mean, and standard error of prediction. The portrayal of results from these two methods on maps compares the range of inundation areas due to prediction uncertainties with uncertainties in the selection of V values. 
The accompanying Open-File Report explains how to install and use the software; its second part walks through all of the Laharz_py tools using an included example dataset for Mount Rainier, Washington.
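The calibration and prediction-interval machinery described above can be sketched as follows. The volumes, areas, and the form of the prediction standard error are hypothetical illustrations, not the calibrated Laharz_py coefficients.

```python
import math

# Hypothetical calibration set: lahar volumes V (m^3) and inundated
# planimetric areas A (m^2) from past events.
V = [1e5, 5e5, 1e6, 5e6, 1e7, 5e7, 1e8]
A = [4.1e5, 1.3e6, 2.2e6, 6.0e6, 9.8e6, 2.9e7, 4.6e7]

# Fit A = c * V**(2/3): with the exponent fixed, log10(c) is the mean
# residual of log10(A) - (2/3) * log10(V).
resid = [math.log10(a) - (2.0 / 3.0) * math.log10(v) for v, a in zip(V, A)]
n = len(resid)
logc = sum(resid) / n
c = 10 ** logc

# Standard error of prediction for a new volume (intercept-only fit on the
# log scale), and a two-sided 95% interval using a t value for n - 1 df.
s = math.sqrt(sum((r - logc) ** 2 for r in resid) / (n - 1))
se_pred = s * math.sqrt(1 + 1.0 / n)
t975 = 2.447  # approximate two-sided 95% t critical value, 6 df

V_new = 2e6
logA = logc + (2.0 / 3.0) * math.log10(V_new)
lower, upper = 10 ** (logA - t975 * se_pred), 10 ** (logA + t975 * se_pred)
print(f"c = {c:.1f}; A({V_new:.0e}) = {10**logA:.3e} m^2, "
      f"95% PI = ({lower:.3e}, {upper:.3e})")
```

Because the fit is done in log space, the interval back-transforms to multiplicative bounds on the predicted area, which is why the mapped upper and lower inundation zones nest around the mean forecast.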

  2. A classification of the galaxy groups

    NASA Technical Reports Server (NTRS)

    Anosova, Joanna P.

    1990-01-01

    A statistical criterion has been proposed to reveal the random and physical clusterings among stars, galaxies and other objects. This criterion has been applied to the galaxy triples of the list by Karachentseva, Karaschentsev and Scherbanovsky, and the double galaxies of the list by Dahari where the primary components are the Seyfert galaxies. The confident physical, probable physical, probable optical and confident optical groups have been identified. The limit difference of radial velocities of components for the confident physical multiple galaxies has also been estimated.

  3. Statistical wave climate projections for coastal impact assessments

    NASA Astrophysics Data System (ADS)

    Camus, P.; Losada, I. J.; Izaguirre, C.; Espejo, A.; Menéndez, M.; Pérez, J.

    2017-09-01

    Global multimodel wave climate projections are obtained at 1.0° × 1.0° scale from 30 Coupled Model Intercomparison Project Phase 5 (CMIP5) global circulation model (GCM) realizations. A semi-supervised weather-typing approach, based on a characterization of the ocean wave generation areas and the historical wave information from the recent GOW2 database, is used to train the statistical model. This framework is also applied to obtain high-resolution projections of coastal wave climate and of coastal impacts such as port operability and coastal flooding. Regional projections are estimated using the collection of weather types at a spacing of 1.0°. This assumption is feasible because the predictor is defined based on the wave generation area and the classification is guided by the local wave climate. The assessment of future changes in coastal impacts is based on direct downscaling of indicators defined by empirical formulations (total water level for coastal flooding and number of hours per year with overtopping for port operability). Global multimodel projections of the significant wave height and peak period are consistent with changes obtained in previous studies. Statistical confidence in the expected changes is obtained thanks to the large number of GCMs used to construct the ensemble. The proposed methodology proves flexible for projecting wave climate at different spatial scales. Regional changes in additional variables such as wave direction, or in other statistics, can be estimated from the future empirical distribution, with extreme values restricted to high percentiles (i.e., the 95th and 99th percentiles). The statistical framework can also be applied to evaluate regional coastal impacts, integrating changes in storminess and sea level rise.

  4. Assessing Threat Detection Scenarios through Hypothesis Generation and Testing

    DTIC Science & Technology

    2015-12-01

    Publications. Field, A. (2005). Discovering statistics using SPSS (2nd ed.). Thousand Oaks, CA: Sage Publications. Fisher, S. D., Gettys, C. F...therefore, subsequent F statistics are reported using the Huynh-Feldt correction (Greenhouse-Geisser Epsilon > .775). Experienced and inexperienced...change in hypothesis using experience and initial confidence as predictors. In the Dog Day scenario, the regression was not statistically

  5. The CASE Project: Evaluation of Case-Based Approaches to Learning and Teaching in Statistics Service Courses

    ERIC Educational Resources Information Center

    Fawcett, Lee

    2017-01-01

    The CASE project (Case-based Approaches to Statistics Education; see www.mas.ncl.ac.uk/~nlf8/innovation) was established to investigate how the use of real-life, discipline-specific case study material in Statistics service courses could improve student engagement, motivation, and confidence. Ultimately, the project aims to promote deep learning…

  6. Probabilistic reasoning under time pressure: an assessment in Italian, Spanish and English psychology undergraduates

    NASA Astrophysics Data System (ADS)

    Agus, M.; Hitchcott, P. K.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.

    2016-11-01

    Many studies have investigated the features of probabilistic reasoning developed in relation to different formats of problem presentation, showing that it is affected by various individual and contextual factors. Incomplete understanding of the identity and role of these factors may explain the inconsistent evidence concerning the effect of problem presentation format. Thus, superior performance has sometimes been observed for graphically, rather than verbally, presented problems. The present study was undertaken to address this issue. Psychology undergraduates without any statistical expertise (N = 173 in Italy; N = 118 in Spain; N = 55 in England) were administered statistical problems in two formats (verbal-numerical and graphical-pictorial) under a condition of time pressure. Students also completed additional measures indexing several potentially relevant individual dimensions (statistical ability, statistical anxiety, attitudes towards statistics and confidence). Interestingly, a facilitatory effect of graphical presentation was observed in the Italian and Spanish samples but not in the English one. Significantly, the individual dimensions predicting statistical performance also differed between the samples, highlighting a different role of confidence. Hence, these findings confirm previous observations concerning problem presentation format while simultaneously highlighting the importance of individual dimensions.

  7. Communication partner training for health care professionals in an inpatient rehabilitation setting: A parallel randomised trial.

    PubMed

    Heard, Renee; O'Halloran, Robyn; McKinley, Kathryn

    2017-06-01

    The purpose of this study was to determine whether the E-Learning Plus communication partner training (CPT) programme is as effective as the Supported Conversation for Adults with Aphasia (SCA™) CPT programme in improving healthcare professionals' confidence and knowledge when communicating with patients with aphasia. Forty-eight healthcare professionals working in inpatient rehabilitation participated. Participants were randomised to one of the CPT programmes. The three outcome measures were a self-rating of confidence, a self-rating of knowledge, and a test of knowledge of aphasia. Measures were taken pre-, immediately post-, and 3-4 months post-training. Data were analysed using mixed between-within ANOVAs. Homogeneity of variance was adequate for the self-rating of confidence and the test of knowledge of aphasia, so analysis continued for these measures. There was a statistically significant difference across time in self-rated confidence and in knowledge of aphasia for both interventions. No statistically significant difference was found between the two interventions. Both CPT interventions were associated with an increase in healthcare professionals' confidence and knowledge of aphasia, but neither programme was superior. As the E-Learning Plus CPT programme is more accessible and sustainable in the Australian healthcare context, further work will continue on this CPT programme.

  8. Product/Process (P/P) Models For The Defense Waste Processing Facility (DWPF): Model Ranges And Validation Ranges For Future Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C.; Edwards, T.

    Radioactive high level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the Defense Waste Processing Facility (DWPF) since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it is poured into ten foot tall by two foot diameter canisters. A unique “feed forward” statistical process control (SPC) was developed for this control rather than statistical quality control (SQC). In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the “feed forward” SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to guarantee, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository.

  9. Persistent homology and non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Cole, Alex; Shiu, Gary

    2018-03-01

    In this paper, we introduce the topological persistence diagram as a statistic for Cosmic Microwave Background (CMB) temperature anisotropy maps. A central concept in 'Topological Data Analysis' (TDA), the idea of persistence is to represent a data set by a family of topological spaces. One then examines how long topological features 'persist' as the family of spaces is traversed. We compute persistence diagrams for simulated CMB temperature anisotropy maps featuring various levels of primordial non-Gaussianity of local type. Postponing the analysis of observational effects, we show that persistence diagrams are more sensitive to local non-Gaussianity than previous topological statistics including the genus and Betti number curves, and can constrain Δf_NL^loc = 35.8 at the 68% confidence level on the simulation set, compared to Δf_NL^loc = 60.6 for the Betti number curves. Given the resolution of our simulations, we expect applying persistence diagrams to observational data will give constraints competitive with those of the Minkowski Functionals. This is the first in a series of papers where we plan to apply TDA to different shapes of non-Gaussianity in the CMB and Large Scale Structure.
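As an illustration of the persistence idea in its simplest setting, the following sketch computes the 0-dimensional persistence pairs of the sublevel sets of a 1-D sequence using union-find: components are born at local minima and die when they merge with an older component. The paper's actual analysis works on 2-D CMB maps and higher-dimensional features; this toy version only conveys the concept.

```python
def persistence_0d(values):
    """0-dimensional persistence pairs (birth, death) of the sublevel sets
    of a 1-D sequence; the global minimum's component never dies."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    comp = [None] * n  # union-find parent; None means not yet added
    pairs = []

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]  # path compression
            i = comp[i]
        return i

    for i in order:
        comp[i] = i  # add point i as its own component
        roots = {find(j) for j in (i - 1, i + 1)
                 if 0 <= j < n and comp[j] is not None}
        if roots:
            # merge into the oldest (lowest-birth) root; younger roots die here
            ordered = sorted(roots, key=lambda r: values[r])
            oldest = ordered[0]
            comp[find(i)] = oldest
            for r in ordered[1:]:
                pairs.append((values[r], values[i]))
                comp[r] = oldest
    survivor = find(order[0])
    pairs.append((values[survivor], float("inf")))
    return pairs

# Two shallow minima merge away; the global minimum persists forever.
print(persistence_0d([0.0, 2.0, 1.0, 3.0, 0.5, 4.0]))
# [(1.0, 2.0), (0.5, 3.0), (0.0, inf)]
```

Long-lived pairs (large death minus birth) correspond to prominent features; the paper's statistic compares whole diagrams of such pairs across simulations.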

  10. Lower education level is a major risk factor for peritonitis incidence in chronic peritoneal dialysis patients: a retrospective cohort study with 12-year follow-up.

    PubMed

    Chern, Yahn-Bor; Ho, Pei-Shan; Kuo, Li-Chueh; Chen, Jin-Bor

    2013-01-01

    Peritoneal dialysis (PD)-related peritonitis remains an important complication in PD patients, potentially causing technique failure and influencing patient outcome. To date, no comprehensive study in the Taiwanese PD population has used a time-dependent statistical method to analyze the factors associated with PD-related peritonitis. Our single-center retrospective cohort study, conducted in southern Taiwan between February 1999 and July 2010, used time-dependent statistical methods to analyze the factors associated with PD-related peritonitis. The study recruited 404 PD patients for analysis, 150 of whom experienced at least 1 episode of peritonitis during the follow-up period. The incidence rate of peritonitis was highest during the first 6 months after PD start. A comparison of patients in the two groups (peritonitis vs null-peritonitis) by univariate analysis showed that the peritonitis group included fewer men (p = 0.048) and more patients of older age (≥65 years, p = 0.049). In addition, patients who had never received compulsory education showed a statistically higher incidence of PD-related peritonitis in the univariate analysis (p = 0.04). A proportional hazards model identified education level (less than elementary school vs any higher education level) as having an independent association with PD-related peritonitis [hazard ratio (HR): 1.45; 95% confidence interval (CI): 1.01 to 2.06; p = 0.045]. Comorbidities measured using the Charlson comorbidity index (score >2 vs ≤2) showed borderline statistical significance (HR: 1.44; 95% CI: 1.00 to 2.13; p = 0.053). A lower education level is a major risk factor for PD-related peritonitis independent of age, sex, hypoalbuminemia, and comorbidities. Our study emphasizes that a comprehensive PD education program is crucial for PD patients with a lower education level.

  11. Prospectively measured triiodothyronine levels are positively associated with breast cancer risk in postmenopausal women

    PubMed Central

    2010-01-01

    Introduction The potential association between hypo- and hyperthyroid disorders and breast cancer has been investigated in a large number of studies during the last decades without conclusive results. This prospective cohort study investigated prediagnostic levels of thyrotropin (TSH) and triiodothyronine (T3) in relation to breast cancer incidence in pre- and postmenopausal women. Methods In the Malmö Preventive Project, 2,696 women had T3 and/or TSH levels measured at baseline. During a mean follow-up of 19.3 years, 173 incident breast cancer cases were retrieved using record linkage with The Swedish Cancer Registry. Quartile cut-points for T3 and TSH were based on the distribution among all women in the study cohort. A Cox's proportional hazards analysis was used to estimate relative risks (RR), with a confidence interval (CI) of 95%. Trends over quartiles of T3 and TSH were calculated considering a P-value < 0.05 as statistically significant. All analyses were repeated for pre- and peri/postmenopausal women separately. Results Overall there was a statistically significant association between T3 and breast cancer risk, the adjusted RR in the fourth quartile, as compared to the first, was 1.87 (1.12 to 3.14). In postmenopausal women the RRs for the second, third and fourth quartiles, as compared to the first, were 3.26 (0.96 to 11.1), 5.53 (1.65 to 18.6) and 6.87 (2.09 to 22.6), (P-trend: < 0.001). There were no such associations in pre-menopausal women, and no statistically significant interaction between T3 and menopausal status. Also, no statistically significant association was seen between serum TSH and breast cancer. Conclusions This is the first prospective study on T3 levels in relation to breast cancer risk. T3 levels in postmenopausal women were positively associated with the risk of breast cancer in a dose-response manner. PMID:20540734

  12. The impact of socioeconomic status and multimorbidity on mortality: a population-based cohort study.

    PubMed

    Lund Jensen, Nikoline; Pedersen, Henrik Søndergaard; Vestergaard, Mogens; Mercer, Stewart W; Glümer, Charlotte; Prior, Anders

    2017-01-01

    Multimorbidity (MM) is more prevalent among people of lower socioeconomic status (SES), and both MM and SES are associated with higher mortality rates. However, little is known about the relationship between SES, MM, and mortality. This study investigates the association between educational level and mortality, and to what extent MM modifies this association. We followed 239,547 individuals invited to participate in the Danish National Health Survey 2010 (mean follow-up time: 3.8 years). MM was assessed by using information on drug prescriptions and diagnoses for 39 long-term conditions. Data on educational level were provided by Statistics Denmark. Date of death was obtained from the Civil Registration System. Information on lifestyle factors and quality of life was collected from the survey. The main outcomes were overall and premature mortality (death before the age of 75). Of a total of 12,480 deaths, 6,607 (9.5%) were of people with low educational level (LEL) and 1,272 (2.3%) were of people with high educational level (HEL). The mortality rate was higher among people with LEL compared with HEL in groups of people with 0-1 disease (hazard ratio: 2.26, 95% confidence interval: 2.00-2.55) and ≥4 diseases (hazard ratio: 1.14, 95% confidence interval: 1.04-1.24), respectively (adjusted model). The absolute number of deaths was six times higher among people with LEL than those with HEL in those with ≥4 diseases. The 1-year cumulative mortality proportions for overall death in those with ≥4 diseases was 5.59% for people with HEL versus 7.27% for people with LEL, and 1-year cumulative mortality proportions for premature death was 2.93% for people with HEL versus 4.04% for people with LEL. Adjusting for potential mediating factors such as lifestyle and quality of life eliminated the statistical association between educational level and mortality in people with MM. 
Our study suggests that LEL is associated with higher overall and premature mortality and that the association is affected by MM, lifestyle factors, and quality of life.

  13. Evidence for a confidence-accuracy relationship in memory for same- and cross-race faces.

    PubMed

    Nguyen, Thao B; Pezdek, Kathy; Wixted, John T

    2017-12-01

    Discrimination accuracy is usually higher for same- than for cross-race faces, a phenomenon known as the cross-race effect (CRE). According to prior research, the CRE occurs because memories for same- and cross-race faces rely on qualitatively different processes. However, according to a continuous dual-process model of recognition memory, memories that rely on qualitatively different processes do not differ in recognition accuracy when confidence is equated. Thus, although there are differences in overall same- and cross-race discrimination accuracy, confidence-specific accuracy (i.e., recognition accuracy at a particular level of confidence) may not differ. We analysed datasets from four recognition memory studies on same- and cross-race faces to test this hypothesis. Confidence ratings reliably predicted recognition accuracy when performance was above chance levels (Experiments 1, 2, and 3) but not when performance was at chance levels (Experiment 4). Furthermore, at each level of confidence, confidence-specific accuracy for same- and cross-race faces did not significantly differ when overall performance was above chance levels (Experiments 1, 2, and 3) but significantly differed when overall performance was at chance levels (Experiment 4). Thus, under certain conditions, high-confidence same-race and cross-race identifications may be equally reliable.

  14. Elaborating Selected Statistical Concepts with Common Experience.

    ERIC Educational Resources Information Center

    Weaver, Kenneth A.

    1992-01-01

    Presents ways of elaborating statistical concepts so as to make course material more meaningful for students. Describes examples using exclamations, circus and cartoon characters, and falling leaves to illustrate variability, null hypothesis testing, and confidence interval. Concludes that the exercises increase student comprehension of the text…

  15. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  16. Parents' confidence in recommended childhood vaccinations: Extending the assessment, expanding the context

    PubMed Central

    Nowak, Glen J.; Cacciatore, Michael A.

    2017-01-01

    ABSTRACT There has been significant and growing interest in vaccine hesitancy and confidence in the United States as well as across the globe. While studies have used confidence measures, few have provided in-depth assessments, and none has assessed parents' confidence in vaccines in relation to other frequently recommended health-related products for young children. This study used a nationally representative sample of 1000 US parents to identify confidence levels for recommended vaccinations, antibiotics, over-the-counter (OTC) medicines, and vitamins for children. The analyses examined associations between confidence ratings, vaccination behaviors and intentions, and trust in the healthcare provider, along with associations between confidence ratings and use of the other health-related products. Parents' confidence in vaccines was high, both absolutely and relative to antibiotics, OTC medicines, and vitamins. For all 4 health-related products examined, past product experience and knowledge of bad or adverse outcomes negatively impacted parents' confidence levels. Confidence levels were associated with both trust in advice from their child's healthcare provider and acceptance of healthcare provider recommendations. Parents in some groups, such as those with lower income and education levels, were more likely to have less confidence not just in vaccines, but also in antibiotics and OTC medicines for children. Overall, the findings extend understanding of vaccine confidence, including by placing it into a broader context. PMID:27682979

  17. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
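
    A Wald-based interval of the kind evaluated above can be illustrated for a percentile rank treated as a simple estimated proportion. This is a deliberate simplification of the paper's multinomial derivation, with a hypothetical norm group:

```python
# Wald-type 95% confidence interval for a percentile rank: treat the rank
# as a proportion p-hat with standard error sqrt(p(1-p)/n). The norm-group
# scores below are hypothetical.
import math

def wald_ci(p_hat, n, z=1.96):
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Percentile rank: share of the norm group scoring at or below a test score.
norm_scores = [12, 15, 17, 18, 20, 21, 23, 25, 26, 29]
score = 21
p_hat = sum(s <= score for s in norm_scores) / len(norm_scores)
lo, hi = wald_ci(p_hat, len(norm_scores))
```

    With a realistic norm group of hundreds of test takers rather than ten, the interval narrows considerably; the bounds may also be clipped to [0, 1].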

  18. Survey mode matters: adults' self-reported statistical confidence, ability to obtain health information, and perceptions of patient-health-care provider communication.

    PubMed

    Wallace, Lorraine S; Chisolm, Deena J; Abdel-Rasoul, Mahmoud; DeVoe, Jennifer E

    2013-08-01

    This study examined adults' self-reported understanding and formatting preferences for medical statistics, confidence in self-care and ability to obtain health advice or information, and perceptions of patient-health-care provider communication, measured through dual survey modes (random digit dial and mail). Even after controlling for sociodemographic characteristics, significant differences in adults' responses to survey variables emerged as a function of survey mode. While the analyses do not allow us to pinpoint the underlying causes of the differences observed, they do suggest that mode of administration should be carefully considered and adjusted for.

  19. Practical Advice on Calculating Confidence Intervals for Radioprotection Effects and Reducing Animal Numbers in Radiation Countermeasure Experiments

    PubMed Central

    Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin

    2014-01-01

    The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on the dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for the dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens, of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
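
    One common way to interval-estimate a ratio like the DRF is a Wald interval on the log scale. The sketch below assumes LD50 estimates and standard errors of log(LD50) are already available from probit or logistic dose-response fits; all numbers are hypothetical, and the paper's spreadsheets and SAS/R code implement the full calculation:

```python
# Dose reduction factor (DRF) with a Wald-type 95% CI built on the log
# scale: log(DRF) = log(LD50_treated) - log(LD50_control), so its variance
# is the sum of the two log-scale variances (all values hypothetical).
import math

ld50_control, se_log_control = 8.0, 0.03    # Gy, SE of log(LD50)
ld50_treated, se_log_treated = 10.0, 0.04

drf = ld50_treated / ld50_control
se_log_drf = math.sqrt(se_log_control ** 2 + se_log_treated ** 2)
lo = drf * math.exp(-1.96 * se_log_drf)
hi = drf * math.exp(+1.96 * se_log_drf)
```

    A DRF interval that excludes 1 supports a real radioprotective effect of the countermeasure.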

  20. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

    ERIC Educational Resources Information Center

    Ochoa, Alma Rosa Aguila; Sander, Paul

    2012-01-01

    Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

  1. Parent's confidence as a caregiver.

    PubMed

    Raines, Deborah A; Brustad, Judith

    2012-06-01

    The purpose of this study was to describe the parent's self-reported confidence as a caregiver. The specific research questions were as follows: • What is the parent's perceived level of confidence when performing infant caregiving activities in the neonatal intensive care unit (NICU)? • What is the parent's projected level of confidence about performing infant caregiving activities on the first day at home? Participants were parents of infants with an anticipated discharge date within 5 days. Inclusion criteria were as follows: parent at least 18 years of age, infant's discharge destination is home with the parent, parent will have primary responsibility for the infant after discharge, and the infant's length of stay in the NICU was a minimum of 10 days. Descriptive, survey research. Participants perceived themselves to be confident in all but 2 caregiving activities when caring for their infants in the NICU, but parents projected a change in their level of confidence in their ability to independently complete infant care activities at home. When comparing the self-reported level of confidence in the NICU and the projected level of confidence at home, the levels of confidence decreased for 5 items, increased for 8 items, and remained unchanged for 2 items. All of the items with a decrease in score were the items with the lowest score when performed in the NICU. All of these low-scoring items are caregiving activities that are unique to the post-NICU status of the infant. Interestingly, the parent's projected level of confidence increased for the 8 items focused on handling and interacting with the infant. The findings of this research provide evidence that nurses may need to rethink when parents become active participants in their infant's medical-based caregiving activities.

  2. HPLC determination of caffeine in coffee beverage

    NASA Astrophysics Data System (ADS)

    Fajara, B. E. P.; Susanti, H.

    2017-11-01

    Coffee is the second most widely consumed beverage in the world, after water. One of the compounds contained in coffee is caffeine, which has pharmacological effects such as stimulating the central nervous system. The purpose of this study was to determine the level of caffeine in coffee beverages with an HPLC method. Three branded coffee beverages included in the Top Brand Index 2016 Phase 2 were used as samples. Qualitative analysis was performed by the Parry method, Dragendorff reagent, and comparison of retention times between the samples and a caffeine standard. Quantitative analysis was done by HPLC with methanol-water (95:5 v/v) as the mobile phase, ODS as the stationary phase, a flow rate of 1 mL/min, and UV detection at 272 nm. The caffeine level data were statistically analyzed using ANOVA at the 95% confidence level. The qualitative analysis showed that all three samples contained caffeine. The average caffeine levels in coffee bottles X, Y, and Z were 138.048 mg/bottle, 109.699 mg/bottle, and 147.669 mg/bottle, respectively. The caffeine contents of the three coffee beverage samples were statistically different (p<0.05). The caffeine levels in the X, Y, and Z coffee beverage samples did not meet the requirement of 50 mg/serving set by the Indonesian Standard Agency.
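
    The one-way ANOVA used here reduces to comparing between-group and within-group variability. A minimal sketch with invented replicate readings (the study's raw replicates are not given in the abstract):

```python
# One-way ANOVA F statistic computed by hand: F = MS_between / MS_within.
# The replicate caffeine readings below are made up for illustration.

def one_way_anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

x = [137.9, 138.2, 138.0]   # hypothetical replicate readings, mg/bottle
y = [109.5, 109.8, 109.8]
z = [147.5, 147.7, 147.8]
f_stat = one_way_anova_f([x, y, z])
# Compare f_stat with the F(k-1, n-k) critical value at the 95% level.
```
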

  3. Statistical variances of diffusional properties from ab initio molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei

    2018-12-01

    Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that can be accessible by AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
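
    The diffusivity at the center of this analysis follows from the Einstein relation MSD(t) = 2dDt. A minimal sketch with synthetic MSD values, assuming the MSD curve has already been extracted from a trajectory and is purely diffusive:

```python
# Estimating diffusivity D from mean squared displacement via the Einstein
# relation MSD(t) = 2 * dim * D * t. The MSD values below are synthetic;
# in practice they come from averaging over ions and time origins in AIMD.

def diffusivity_from_msd(times, msd, dim=3):
    """Least-squares slope through the origin; D = slope / (2 * dim)."""
    slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
    return slope / (2 * dim)

times = [1.0, 2.0, 3.0, 4.0, 5.0]           # ps
true_d = 1.0e-3                              # Å^2/ps, assumed for the demo
msd = [2 * 3 * true_d * t for t in times]    # exact Einstein-relation MSD
d_est = diffusivity_from_msd(times, msd)     # recovers 1.0e-3
```

    With real AIMD data the fitted slope carries statistical error from the finite number of hopping events, which is exactly the variance this paper quantifies.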

  4. Accuracy assessment of percent canopy cover, cover type, and size class

    Treesearch

    H. T. Schreuder; S. Bain; R. C. Czaplewski

    2003-01-01

    Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP), stand structure as measured by size classes, and vegetation types from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
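
    The recommended Kappa-with-bootstrap approach can be sketched generically as follows; the map and reference labels are invented, and this is plain Cohen's kappa with a percentile bootstrap rather than the authors' exact procedure:

```python
# Cohen's kappa for map-vs-reference agreement, with a percentile-bootstrap
# 95% confidence interval (labels below are invented for illustration).
import random

def kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0    # chance-corrected

random.seed(0)
mapped = ['f', 'f', 'g', 'f', 'g', 'g', 'f', 'g', 'f', 'f']
truth  = ['f', 'g', 'g', 'f', 'g', 'g', 'f', 'f', 'f', 'f']
k_hat = kappa(mapped, truth)

boots = []
for _ in range(2000):
    idx = [random.randrange(len(mapped)) for _ in range(len(mapped))]
    boots.append(kappa([mapped[i] for i in idx], [truth[i] for i in idx]))
boots.sort()
ci = (boots[49], boots[1949])   # approximate 95% percentile interval
```
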

  5. Predictor sort sampling and one-sided confidence bounds on quantiles

    Treesearch

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non- random...

  6. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  7. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of their confidence intervals. SIMREL offers two modes of operation. In the first, SIMREL is run on a single data file and performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  8. Learner Characteristics Predict Performance and Confidence in E-Learning: An Analysis of User Behavior and Self-Evaluation

    ERIC Educational Resources Information Center

    Jeske, Debora; Roßnagell, Christian Stamov; Backhaus, Joy

    2014-01-01

    We examined the role of learner characteristics as predictors of four aspects of e-learning performance, including knowledge test performance, learning confidence, learning efficiency, and navigational effectiveness. We used both self reports and log file records to compute the relevant statistics. Regression analyses showed that both need for…

  9. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    ERIC Educational Resources Information Center

    Wiens, Stefan; Nilsson, Mats E.

    2017-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…

  10. Patient confidence regarding secondary lifestyle modification and knowledge of 'heart attack' symptoms following percutaneous revascularisation in Japan: a cross-sectional study.

    PubMed

    Kitakata, Hiroki; Kohno, Takashi; Kohsaka, Shun; Fujino, Junko; Nakano, Naomi; Fukuoka, Ryoma; Yuasa, Shinsuke; Maekawa, Yuichiro; Fukuda, Keiichi

    2018-03-16

    To assess patient perspectives on secondary lifestyle modification and knowledge of 'heart attack' after percutaneous coronary intervention (PCI) for coronary artery disease (CAD). Observational cross-sectional study. A single university-based hospital centre in Japan. In total, 236 consecutive patients with CAD who underwent PCI completed a questionnaire (age, 67.4±10.1 years; women, 14.8%; elective PCI, 75.4%). The survey questionnaire included questions related to confidence levels about (1) lifestyle modification at the time of discharge and (2) appropriate recognition of heart attack symptoms and reactions to these symptoms, on a four-point Likert scale (1=not confident to 4=completely confident). The primary outcome assessed was the patients' confidence level regarding lifestyle modification and the recognition of heart attack symptoms. Overall, patients had a high level of confidence (confident or completely confident, >75%) about smoking cessation, alcohol restriction and medication adherence. However, they had a relatively low level of confidence (<50%) about the maintenance of blood pressure control, healthy diet, body weight and routine exercise (≥3 times/week). After adjustment, male sex (OR 3.61, 95% CI 1.11 to 11.8) and lower educational level (OR 3.25; 95% CI 1.70 to 6.23) were identified as factors associated with lower confidence levels. In terms of confidence in the recognition of heart attack, almost all respondents answered 'yes' to the item 'I should go to the hospital as soon as possible when I have a heart attack'; however, only 28% of the respondents were confident in their ability to distinguish between heart attack symptoms and other conditions. There were substantial disparities in the confidence levels associated with lifestyle modification and with recognition of and response to heart attack. These gaps need to be studied further and the findings disseminated to improve cardiovascular care.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Diagnostic Value of Serum YKL-40 Level for Coronary Artery Disease: A Meta-Analysis.

    PubMed

    Song, Chun-Li; Bin-Li; Diao, Hong-Ying; Wang, Jiang-Hua; Shi, Yong-fei; Lu, Yang; Wang, Guan; Guo, Zi-Yuan; Li, Yang-Xue; Liu, Jian-Gen; Wang, Jin-Peng; Zhang, Ji-Chang; Zhao, Zhuo; Liu, Yi-Hang; Li, Ying; Cai, Dan; Li, Qian

    2016-01-01

    This meta-analysis aimed to identify the value of serum YKL-40 level for the diagnosis of coronary artery disease (CAD). Through searches of the following electronic databases: the Cochrane Library Database (Issue 12, 2013), Web of Science (1945 ∼ 2013), PubMed (1966 ∼ 2013), CINAHL (1982 ∼ 2013), EMBASE (1980 ∼ 2013), and the Chinese Biomedical Database (CBM; 1982 ∼ 2013), related articles were identified without any language restrictions. STATA statistical software (Version 12.0, Stata Corporation, College Station, TX) was used for the statistical analysis. The standardized mean difference (SMD) and its corresponding 95% confidence interval (95% CI) were calculated. Eleven clinical case-control studies that recruited 1,175 CAD patients and 1,261 healthy controls were selected for statistical analysis. The main findings of our meta-analysis showed that the serum YKL-40 level in CAD patients was significantly higher than that in control subjects (SMD = 2.79, 95% CI = 1.73 ∼ 3.85, P < 0.001). Ethnicity-stratified analysis indicated a higher serum YKL-40 level in CAD patients than in control subjects among Chinese, Korean, and Danish populations (China: SMD = 2.97, 95% CI = 1.21 ∼ 4.74, P = 0.001; Korea: SMD = 0.66, 95% CI = 0.17 ∼ 1.15, P = 0.008; Denmark: SMD = 1.85, 95% CI = 1.42 ∼ 2.29, P < 0.001; respectively), but not in the Turkish population (SMD = 4.52, 95% CI = -2.87 ∼ 11.91, P = 0.231). The present meta-analysis suggests that an elevated serum YKL-40 level may be used as a promising diagnostic tool for early identification of CAD.
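
    The SMD-with-CI building block pooled in such a meta-analysis can be sketched for a single study. The group summaries below are invented, and the formula is the common large-sample approximation for Cohen's d, not the exact computation used by STATA:

```python
# Standardised mean difference (Cohen's d) with an approximate 95% CI,
# computed from per-group summary statistics (values are invented).
import math

def smd_with_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    s_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Large-sample standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se

# Hypothetical YKL-40-style summaries: cases vs. controls.
d, lo, hi = smd_with_ci(m1=110.0, s1=40.0, n1=100, m2=70.0, s2=35.0, n2=100)
```
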

  12. Self-confidence in financial analysis: a study of younger and older male professional analysts.

    PubMed

    Webster, R L; Ellis, T S

    2001-06-01

    Measures of reported self-confidence in performing financial analysis by 59 professional male analysts, 31 born between 1946 and 1964 and 28 born between 1965 and 1976, were investigated and reported. Self-confidence in one's ability is important in the securities industry because it affects recommendations and decisions to buy, sell, and hold securities. The respondents analyzed a set of multiyear corporate financial statements and reported their self-confidence in six separate financial areas. Data from the 59 male financial analysts were tallied and analyzed using both univariate and multivariate statistical tests. Rated self-confidence was not significantly different for the younger and the older men. These results are not consistent with a similar prior study of female analysts in which younger women showed significantly higher self-confidence than older women.

  13. [Design and validation of a questionnaire to assess the level of general knowledge on eating disorders in students of Health Sciences].

    PubMed

    Sánchez Socarrás, Violeida; Aguilar Martínez, Alicia; Vaqué Crusellas, Cristina; Milá Villarroel, Raimon; González Rivas, Fabián

    2016-01-01

    To design and validate a questionnaire to assess the level of knowledge regarding eating disorders in college students. Observational, prospective, and longitudinal study, with the design of the questionnaire based on a conceptual review and validated by a cognitive pre-test and a pilot test-retest, with analysis of the psychometric properties in each application. University Foundation of Bages, Barcelona. Setting: community care. A total of 140 students from Health Sciences; 53 women and 87 men with a mean age of 21.87 years; 28 participated in the pre-test and 112 in the test-retest, and 110 students completed the study. Validity and stability were assessed using Cronbach's α and the Pearson product-moment correlation coefficient; the relationship of knowledge with sex and type of study was examined with the non-parametric Mann-Whitney and Kruskal-Wallis tests; for demographic variables, absolute and percentage frequencies were calculated, as well as the mean as a measure of central tendency and the standard deviation as a measure of dispersion. Statistical significance was set at the 95% confidence level. The resulting questionnaire had 10 questions divided into four dimensions (classification, demographic characteristics of patients, risk factors, and clinical manifestations of eating disorders). The scale showed good internal consistency in its final version (Cronbach's α=0.724) and adequate stability (Pearson correlation 0.749). The designed tool can be accurately used to assess Health Sciences students' knowledge of eating disorders. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
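
    The Cronbach's α reported for this scale is a simple function of item and total-score variances. A minimal sketch with fabricated item responses (three items, five respondents):

```python
# Cronbach's alpha from an item-response matrix:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# The responses below are fabricated for illustration.

def cronbach_alpha(items):
    """`items` is a list of per-item score lists, all of equal length."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance; using ddof=1 throughout gives
        m = sum(xs) / len(xs)       # the same alpha
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

items = [[3, 4, 2, 5, 4], [2, 4, 3, 5, 4], [3, 5, 2, 4, 4]]
alpha = cronbach_alpha(items)
```

    Values around 0.7 or above, as in the study (α=0.724), are conventionally taken as acceptable internal consistency.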

  14. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
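
    The importance-sampling idea mentioned above can be sketched on a toy rare-event problem: estimating a small tail probability by drawing from a distribution biased toward the rare region and reweighting by the likelihood ratio. The numbers and distributions are illustrative, not from the survey:

```python
# Importance sampling for a rare-event probability: estimate P(X > 4) for
# a standard normal by sampling from a normal centred on the rare region.
import random
from statistics import NormalDist

random.seed(42)
target = NormalDist(0, 1)     # distribution we care about
proposal = NormalDist(4, 1)   # biased sampler centred on the rare region

n = 20000
total = 0.0
for _ in range(n):
    x = random.gauss(4, 1)                        # draw from the proposal
    if x > 4:                                     # rare-event indicator
        total += target.pdf(x) / proposal.pdf(x)  # likelihood-ratio weight
estimate = total / n   # true value is 1 - Phi(4) ≈ 3.17e-5
```

    Naive Monte Carlo would need on the order of ten million draws to see this event reliably; the reweighted sampler gets a tight estimate from twenty thousand, which is why the technique is used to accelerate dependability simulations of rare failures.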

  15. The use and misuse of statistical methodologies in pharmacology research.

    PubMed

    Marino, Michael J

    2014-01-01

    Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods often chosen more by the availability of user-friendly software than by any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α<0.05 criterion have hampered research via the publication of incorrect analyses driven by rudimentary statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in the quality of and confidence in research findings, fueling recent controversies over the reproducibility of high-profile findings and effects that appear to diminish over time. The recent development of "omics" approaches leading to the production of massive higher-dimensional data sets has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests that hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.

  16. Parental Self-Assessment of Behavioral Effectiveness in Young Children and Views on Corporal Punishment in an Academic Pediatric Practice.

    PubMed

    Irons, Lance B; Flatin, Heidi; Harrington, Maya T; Vazifedan, Turaj; Harrington, John W

    2018-03-01

    This article assesses parental confidence and current behavioral techniques used by mostly African American caregivers of young children in an urban Southeastern setting, including their use of and attitudes toward corporal punishment (CP). Two hundred and fifty parental participants of children aged 18 months to 5 years completed a survey on factors affecting their behavioral management and views on CP. Statistical analysis included the χ2 test and logistic regression, with significance set at P < .05. Significant associations with CP usage were found for parents who were themselves exposed to CP and for parental level of frustration with child disobedience. A total of 40.2% of respondents answered that they had not received any discipline strategies from pediatricians, and 47.6% were interested in receiving more behavioral strategies. Clear opportunities exist for pediatricians to provide information on evidence-based disciplinary techniques, and these discussions may be facilitated through the creation of a No Hit Zone program in the pediatric practice.

  17. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV > or = 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002

  18. QQ-plots for assessing distributions of biomarker measurements and generating defensible summary statistics

    EPA Science Inventory

    One of the main uses of biomarker measurements is to compare different populations to each other and to assess risk in comparison to established parameters. This is most often done using summary statistics such as central tendency, variance components, confidence intervals, excee...
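
    The QQ-plot coordinates behind such an assessment can be computed directly: sorted sample values against theoretical normal quantiles at standard plotting positions. The biomarker measurements below are made up:

```python
# Normal QQ-plot coordinates: sorted sample quantiles paired with
# theoretical standard-normal quantiles at plotting positions (i-0.5)/n.
# Other plotting-position conventions differ slightly.
from statistics import NormalDist

def qq_points(sample):
    xs = sorted(sample)
    n = len(xs)
    theo = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
    return list(zip(theo, xs))

biomarker = [4.1, 5.0, 4.6, 5.8, 4.9, 5.3, 4.4, 5.1]  # fabricated values
pts = qq_points(biomarker)
# Points lying close to a straight line suggest approximate normality,
# supporting normal-theory summary statistics; strong curvature suggests
# a transform (e.g. log) before summarizing.
```
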

  19. Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks

    PubMed Central

    Bock, Joel R.; Maewal, Akhilesh; Gough, David A.

    2012-01-01

    Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive-game hitting streak. Box score data for entire seasons comprising long hitting streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter’s team. The percentile-method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance as compared against the control, with both batting measures higher during hot streaks, including the batting heat index introduced here. For each performance statistic, the null hypothesis was rejected at the chosen significance level. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507
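
    The percentile-method bootstrap used here can be sketched for a difference in group means. The batting averages below are toy values; the study used full box-score data:

```python
# Percentile-method bootstrap CI for the difference in mean batting average
# between a treatment (hot-streak) and a control group (toy values).
import random

random.seed(1)
treatment = [0.295, 0.310, 0.288, 0.305, 0.320, 0.299, 0.312, 0.301]
control   = [0.281, 0.290, 0.284, 0.297, 0.288, 0.279, 0.293, 0.286]
mean = lambda xs: sum(xs) / len(xs)

diffs = []
for _ in range(4000):
    t = [random.choice(treatment) for _ in treatment]  # resample w/ replacement
    c = [random.choice(control) for _ in control]
    diffs.append(mean(t) - mean(c))
diffs.sort()
ci_95 = (diffs[99], diffs[3899])  # 2.5th and 97.5th percentiles
# An interval excluding zero is evidence against the no-difference null
# at roughly the 5% level.
```
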

  20. Comparison of methods for the removal of organic carbon and extraction of chromium, iron and manganese from an estuarine sediment standard and sediment from the Calcasieu River estuary, Louisiana, U.S.A.

    USGS Publications Warehouse

    Simon, N.S.; Hatcher, S.A.; Demas, C.

    1992-01-01

    U.S. National Bureau of Standards (NBS) estuarine sediment 1646 from the Chesapeake Bay, Maryland, and surface sediment collected at two sites in the Calcasieu River estuary, Louisiana, were used to evaluate the dilute hydrochloric acid extraction of Cr, Fe and Mn from air-dried and freeze-dried samples that had been treated by one of three methods to remove organic carbon. The three methods for the oxidation and removal of organic carbon were: (1) 30% hydrogen peroxide; (2) 30% hydrogen peroxide plus 0.25 mM pyrophosphate; and (3) plasma oxidation (low-temperature ashing). There was no statistically significant difference at the 95% confidence level between air- and freeze-dried samples with respect to the percent of organic carbon removed by the three methods. Generally, there was no statistically significant difference at the 95% confidence level between air- and freeze-dried samples with respect to the concentration of Cr, Fe and Mn that was extracted, regardless of the extraction technique that was used. Hydrogen peroxide plus pyrophosphate removed the most organic carbon from sediment collected at the site in the Calcasieu River that was upstream from industrial outfalls. Plasma oxidation removed the most organic carbon from the sediment collected at a site in the Calcasieu River close to industrial outfalls and from the NBS estuarine sediment sample. Plasma oxidation merits further study as a treatment for removal of organic carbon. Operational parameters can be chosen to limit the plasma oxidation of pyrite which, unlike other Fe species, will not be dissolved by dilute hydrochloric acid. Preservation of pyrite allows the positive identification of Fe present as pyrite in sediments. © 1992.

  1. Impact of Land Surface Initialization Approach on Subseasonal Forecast Skill: a Regional Analysis in the Southern Hemisphere

    NASA Technical Reports Server (NTRS)

    Hirsch, Annette L.; Kala, Jatin; Pitman, Andy J.; Carouge, Claire; Evans, Jason P.; Haverd, Vanessa; Mocko, David

    2014-01-01

    The authors use a sophisticated coupled land-atmosphere modeling system for a Southern Hemisphere subdomain centered over southeastern Australia to evaluate differences in simulation skill from two different land surface initialization approaches. The first approach uses equilibrated land surface states obtained from offline simulations of the land surface model, and the second uses land surface states obtained from reanalyses. The authors find that land surface initialization using prior offline simulations contributes to relative gains in subseasonal forecast skill. In particular, relative gains in forecast skill for temperature of 10%-20% within the first 30 days of the forecast can be attributed to the land surface initialization method using offline states. For precipitation there is no distinct preference for the land surface initialization method, with limited gains in forecast skill irrespective of the lead time. The authors evaluated the asymmetry between maximum and minimum temperatures and found that maximum temperatures had the largest gains in relative forecast skill, exceeding 20% in some regions. These results were statistically significant at the 98% confidence level at up to 60 days into the forecast period. For minimum temperature, using reanalyses to initialize the land surface contributed to relative gains in forecast skill, reaching 40% in parts of the domain that were statistically significant at the 98% confidence level. The contrasting impact of the land surface initialization method between maximum and minimum temperature was associated with different soil moisture coupling mechanisms. Therefore, land surface initialization from prior offline simulations does improve predictability for temperature, particularly maximum temperature, but with less obvious improvements for precipitation and minimum temperature over southeastern Australia.

  2. Self-Reported Recovery from 2-Week 12-Hour Shift Work Schedules: A 14-Day Follow-Up

    PubMed Central

    Merkus, Suzanne L.; Holte, Kari Anne; Huysmans, Maaike A.; van de Ven, Peter M.; van Mechelen, Willem; van der Beek, Allard J.

    2015-01-01

    Background Recovery from fatigue is important in maintaining night workers' health. This study compared the course of self-reported recovery after 2-week 12-hour schedules consisting of either night shifts or swing shifts (i.e., 7 night shifts followed by 7 day shifts) to such schedules consisting of only day work. Methods Sixty-one male offshore employees—20 night workers, 16 swing shift workers, and 25 day workers—rated six questions on fatigue (sleep quality, feeling rested, physical and mental fatigue, and energy levels; scale 1–11) for 14 days after an offshore tour. After the two night-work schedules, differences on the 1st day (main effects) and differences during the follow-up (interaction effects) were compared to day work with generalized estimating equations analysis. Results After adjustment for confounders, significant main effects were found for sleep quality for night workers (1.41, 95% confidence interval 1.05–1.89) and swing shift workers (1.42, 95% confidence interval 1.03–1.94) when compared to day workers; their interaction terms were not statistically significant. For the remaining fatigue outcomes, no statistically significant main or interaction effects were found. Conclusion After 2-week 12-hour night and swing shifts, only the course for sleep quality differed from that of day work. Sleep quality was poorer for night and swing shift workers on the 1st day off and remained poorer for the 14-day follow-up. This showed that while working at night had no effect on feeling rested, tiredness, and energy levels, it had a relatively long-lasting effect on sleep quality. PMID:26929834

  3. Research awareness: An important factor for evidence-based practice?

    PubMed

    McSherry, Robert; Artley, Angela; Holloran, Jan

    2006-01-01

Despite the growing body of literature, the reality of getting evidence into practice remains problematic. The purpose of this study was to establish levels of research awareness amongst registered health care professionals (RHCPs) and the influence of research awareness on evidence-based practice activities. This was a descriptive quantitative study. A convenience sample of 2,126 RHCPs working in a large acute hospital in Northeast England, the United Kingdom was used. A self-completion Research Awareness Questionnaire (RAQ) was used to measure RHCPs' attitudes towards research, understanding of research and the research process, and associations with practising using an evidence base. Data were entered into a Statistical Package for the Social Sciences (SPSS) database and descriptive and inferential statistics were used. A total of 843 questionnaires were returned. Seven hundred and thirty-three (91%) RHCPs overwhelmingly agreed with the principle that evidence-based practice has a large part to play in improving patient care. This point was reinforced by 86% (n = 701) of respondents strongly agreeing or agreeing with the idea that evidence-based practice is the way forward to change clinical practice. Significant associations were noted between levels of confidence to undertake a piece of research and whether the individual had received adequate information about the research process, had basic knowledge and understanding of the research process, or had research awareness education or training. The study shows that RHCPs, regardless of position or grade, have a positive attitude towards research but face many obstacles. The key obstacles are lack of time, support, knowledge, and confidence. To address these obstacles, it is imperative that the organisation adopts a structured and coordinated approach to enable and empower individuals to practise using an evidence base.

  4. Mean bond-length variations in crystals for ions bonded to oxygen

    PubMed Central

    2017-01-01

Variations in mean bond length are examined in oxide and oxysalt crystals for 55 cation configurations bonded to O2−. Stepwise multiple regression analysis shows that mean bond length is correlated with bond-length distortion in 42 ion configurations at the 95% confidence level, with a mean coefficient of determination (〈R 2〉) of 0.35. Previously published correlations between mean bond length and mean coordination number of the bonded anions are found not to be of general applicability to inorganic oxide and oxysalt structures. For two of 11 ions tested for the 95% confidence level, mean bond lengths predicted using a fixed radius for O2− are significantly more accurate than those predicted using an O2− radius dependent on coordination number, and are statistically identical otherwise. As a result, the currently accepted ionic radii for O2− in different coordinations are not justified by experimental data. The previously reported correlation between mean bond length and the mean electronegativity of the cations bonded to the oxygen atoms of the coordination polyhedron is shown to be statistically insignificant; similar results are obtained with regard to ionization energy. It is shown that a priori bond lengths calculated for many ion configurations in a single structure-type lead to a high correlation between a priori and observed mean bond lengths, but a priori bond lengths calculated for a single ion configuration in many different structure-types lead to negligible correlation between a priori and observed mean bond lengths. This indicates that structure type has a major effect on mean bond length, the magnitude of which goes beyond that of the other variables analyzed here.
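    The 95%-confidence screen on correlations reported above can be illustrated with the standard t-test on a Pearson correlation coefficient. The sketch below uses invented bond-length and distortion values, not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Illustrative (made-up) mean bond lengths vs. bond-length distortions.
x = [1.96, 1.98, 2.01, 2.03, 2.05, 2.08, 2.10, 2.13]
y = [0.001, 0.004, 0.003, 0.008, 0.009, 0.013, 0.012, 0.017]
r = pearson_r(x, y)
t = t_statistic(r, len(x))
# The two-sided critical value for df = 6 at the 95% confidence level is
# about 2.447; |t| above that threshold means a significant correlation.
print(round(r, 3), round(t, 2), abs(t) > 2.447)
```

    With real data the same test is run once per ion configuration; the study's 42-of-55 tally counts how many clear the 95% threshold.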

  5. Standardisation of a European measurement method for organic carbon and elemental carbon in ambient air: results of the field trial campaign and the determination of a measurement uncertainty and working range.

    PubMed

    Brown, Richard J C; Beccaceci, Sonya; Butterfield, David M; Quincey, Paul G; Harris, Peter M; Maggos, Thomas; Panteliadis, Pavlos; John, Astrid; Jedynska, Aleksandra; Kuhlbusch, Thomas A J; Putaud, Jean-Philippe; Karanasiou, Angeliki

    2017-10-18

The European Committee for Standardisation (CEN) Technical Committee 264 'Air Quality' has recently produced a standard method for the measurement of organic carbon and elemental carbon in PM2.5 within its working group 35, in response to the requirements of European Directive 2008/50/EC. It is expected that this method will be used in future by all Member States making measurements of the carbonaceous content of PM2.5. This paper details the results of a laboratory and field measurement campaign and the statistical analysis performed to validate the standard method, assess its uncertainty and define its working range, to provide clarity and confidence in the underpinning science for future users of the method. The statistical analysis showed that the expanded combined uncertainty for transmittance protocol measurements of OC, EC and TC is expected to be below 25%, at the 95% level of confidence, above filter loadings of 2 μg cm⁻². An estimation of the detection limit of the method for total carbon was 2 μg cm⁻². As a result of the laboratory and field measurement campaign the EUSAAR2 transmittance measurement protocol was chosen as the basis of the standard method EN 16909:2017.
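    The expanded combined uncertainty quoted above follows the usual propagation recipe: combine independent standard-uncertainty components in quadrature, then multiply by a coverage factor k ≈ 2 for a 95% level of confidence. A minimal sketch with invented component values, not those of the field trial:

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    and expand with coverage factor k (k = 2 gives ~95% confidence)."""
    u_c = math.sqrt(sum(u * u for u in components))
    return k * u_c

# Hypothetical relative standard uncertainties for an OC/EC measurement:
# repeatability, calibration, filter blank, sampled volume.
components = [0.05, 0.06, 0.04, 0.08]
U_rel = expanded_uncertainty(components)
# A result below 0.25 would satisfy the 25% criterion quoted above.
print(f"expanded relative uncertainty: {U_rel:.1%}")
```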

  6. Health significance and statistical uncertainty. The value of P-value.

    PubMed

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" (defined as "statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. Our aim is to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of the P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and diverts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistical significance or not). When reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".
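    The recommended reporting style can be illustrated with a standard Wald confidence interval for a risk ratio, computed on the log scale. The counts below are hypothetical, chosen only to show how the interval conveys both effect size and uncertainty:

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio of exposed (a/n1) vs. unexposed (c/n2) with a Wald
    95% confidence interval computed on the log scale."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical occupational cohort: 30/500 cases among exposed workers,
# 18/600 among unexposed.
rr, lo, hi = risk_ratio_ci(30, 500, 18, 600)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

    Reporting "RR = 2.00, 95% CI 1.13-3.54" tells the reader both the size of the effect and its uncertainty, which "P < 0.05" alone does not.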

  7. Validation of an instrument to assess evidence-based practice knowledge, attitudes, access, and confidence in the dental environment.

    PubMed

    Hendricson, William D; Rugh, John D; Hatch, John P; Stark, Debra L; Deahl, Thomas; Wallmann, Elizabeth R

    2011-02-01

This article reports the validation of an assessment instrument designed to measure the outcomes of training in evidence-based practice (EBP) in the context of dentistry. Four EBP dimensions are measured by this instrument: 1) understanding of EBP concepts, 2) attitudes about EBP, 3) evidence-accessing methods, and 4) confidence in critical appraisal. The instrument (the Knowledge, Attitudes, Access, and Confidence Evaluation, or KACE) has four scales, with a total of thirty-five items: EBP knowledge (ten items), EBP attitudes (ten), accessing evidence (nine), and confidence (six). Four elements of validity were assessed: consistency of items within the KACE scales (extent to which items within a scale measure the same dimension), discrimination (capacity to detect differences between individuals with different training or experience), responsiveness (capacity to detect the effects of education on trainees), and test-retest reliability. Internal consistency of scales was assessed by analyzing responses of second-year dental students, dental residents, and dental faculty members using Cronbach coefficient alpha, a statistical measure of reliability. Discriminative validity was assessed by comparing KACE scores for the three groups. Responsiveness was assessed by comparing pre- and post-training responses for dental students and residents. To measure test-retest reliability, the full KACE was completed twice by a class of freshman dental students seventeen days apart, and the knowledge scale was completed twice by sixteen faculty members fourteen days apart. Item-to-scale consistency ranged from 0.21 to 0.78 for knowledge, 0.57 to 0.83 for attitude, 0.70 to 0.84 for accessing evidence, and 0.87 to 0.94 for confidence. For discrimination, ANOVA and post hoc testing by the Tukey-Kramer method revealed significant score differences among students, residents, and faculty members consistent with education and experience levels. 
For responsiveness to training, dental students and residents demonstrated statistically significant changes, in desired directions, from pre- to post-test. For the student test-retest, Pearson correlations for KACE scales were as follows: knowledge 0.66, attitudes 0.66, accessing evidence 0.74, and confidence 0.76. For the knowledge scale test-retest by faculty members, the Pearson correlation was 0.79. The construct validity of the KACE is equivalent to that of instruments that assess similar EBP dimensions in medicine. Item consistency for the knowledge scale was more variable than for other KACE scales, a finding also reported for medically oriented EBP instruments. We conclude that the KACE has good discriminative validity, responsiveness to training effects, and test-retest reliability.
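    Cronbach's coefficient alpha, used above to assess internal consistency, relates the sum of the item variances to the variance of the total score. A minimal sketch with invented responses (not KACE data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per
    item, with respondents in the same order in every list)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical responses of six people to a 3-item confidence scale (1-5).
items = [
    [4, 5, 3, 2, 4, 5],
    [4, 4, 3, 2, 5, 5],
    [5, 4, 2, 3, 4, 4],
]
print(round(cronbach_alpha(items), 2))
```

    Values near 1 indicate that the items move together, i.e. measure the same underlying dimension.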

  8. Self-assessment of nursing competency among final year nursing students in Thailand: a comparison between public and private nursing institutions.

    PubMed

    Sawaengdee, Krisada; Kantamaturapoj, Kanang; Seneerattanaprayul, Parinda; Putthasri, Weerasak; Suphanchaimat, Rapeepong

    2016-01-01

    Nurses play a major role in Thailand's health care system. In recent years, the production of nurses, in both the public and private sectors, has been growing rapidly to respond to the shortage of health care staff. Alongside concerns over the number of nurses produced, the quality of nursing graduates is of equal importance. This study therefore aimed to 1) compare the self-assessed competency of final year Thai nursing students between public and private nursing schools, and 2) explore factors that were significantly associated with competency level. A cross-sectional clustered survey was conducted on 40 Thai nursing schools. Data were collected through self-administered questionnaires. The questionnaire consisted of questions about respondents' background, their education profile, and a self-measured competency list. Descriptive statistics, factor analysis, and multivariate regression analysis were applied. A total of 3,349 students participated in the survey. Approximately half of the respondents had spent their childhood in rural areas. The majority of respondents reported being "confident" or "very confident" in all competencies. Private nursing students reported a higher level of "public health competency" than public nursing students with statistical significance. However, there was no significant difference in "clinical competency" between the two groups. Nursing students from private institutions seemed to report higher levels of competency than those from public institutions, particularly with regard to public health. This phenomenon might have arisen because private nursing students had greater experience of diverse working environments during their training. One of the key limitations of this study was that the results were based on the subjective self-assessment of the respondents, which might risk respondent bias. 
Further studies that evaluate current nursing curricula in both public and private nursing schools to assess whether they meet the health needs of the population are recommended.

  9. Predicting Mortality in African Americans With Type 2 Diabetes Mellitus: Soluble Urokinase Plasminogen Activator Receptor, Coronary Artery Calcium, and High-Sensitivity C-Reactive Protein.

    PubMed

    Hayek, Salim S; Divers, Jasmin; Raad, Mohamad; Xu, Jianzhao; Bowden, Donald W; Tracy, Melissa; Reiser, Jochen; Freedman, Barry I

    2018-05-01

    Type 2 diabetes mellitus is a major risk factor for cardiovascular disease; however, outcomes in individual patients vary. Soluble urokinase plasminogen activator receptor (suPAR) is a bone marrow-derived signaling molecule associated with adverse cardiovascular and renal outcomes in many populations. We characterized the determinants of suPAR in African Americans with type 2 diabetes mellitus and assessed whether levels were useful for predicting mortality beyond clinical characteristics, coronary artery calcium (CAC), and high-sensitivity C-reactive protein (hs-CRP). We measured plasma suPAR levels in 500 African Americans with type 2 diabetes mellitus enrolled in the African American-Diabetes Heart Study. We used Kaplan-Meier curves and Cox proportional hazards models adjusting for clinical characteristics, CAC, and hs-CRP to examine the association between suPAR and all-cause mortality. Last, we report the change in C-statistics comparing the additive values of suPAR, hs-CRP, and CAC to clinical models for prediction of mortality. The suPAR levels were independently associated with female sex, smoking, insulin use, decreased kidney function, albuminuria, and CAC. After a median 6.8-year follow-up, a total of 68 deaths (13.6%) were recorded. In a model incorporating suPAR, CAC, and hs-CRP, only suPAR was significantly associated with mortality (hazard ratio 2.66, 95% confidence interval 1.63-4.34). Addition of suPAR to a baseline clinical model significantly improved the C-statistic for all-cause death (Δ0.05, 95% confidence interval 0.01-0.10), whereas addition of CAC or hs-CRP did not. In African Americans with type 2 diabetes mellitus, suPAR was strongly associated with mortality and improved risk discrimination metrics beyond traditional risk factors, CAC and hs-CRP. Studies addressing the clinical usefulness of measuring suPAR concentrations are warranted. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
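    The C-statistic used above to compare risk discrimination is, for survival data, Harrell's concordance index. A pure-Python sketch with hypothetical follow-up times and risk scores (not data from this cohort):

```python
def c_statistic(times, events, risk_scores):
    """Harrell's concordance index: among usable pairs (where the earlier
    time is an observed event), the fraction in which the higher risk
    score belongs to the earlier failure. Score ties count 0.5."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical follow-up times (years), death indicators, and
# suPAR-style risk scores for six patients.
times  = [2.0, 6.8, 3.5, 6.8, 1.2, 5.0]
events = [1,   0,   1,   0,   1,   0]
risk   = [3.1, 1.2, 1.5, 1.8, 3.9, 2.0]
print(round(c_statistic(times, events, risk), 2))
```

    The Δ0.05 reported above is the gain in this index when suPAR is added to the clinical model.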

  10. Predicting duration of mechanical ventilation in patients with carbon monoxide poisoning: a retrospective study.

    PubMed

    Shen, Chih-Hao; Peng, Chung-Kan; Chou, Yu-Ching; Pan, Ke-Ting; Chang, Shun-Cheng; Chang, Shan-Yueh; Huang, Kun-Lun

    2015-02-01

Patients with severe carbon monoxide (CO) poisoning may develop acute respiratory failure, which needs endotracheal intubation and mechanical ventilation (MV). The objective of this study was to identify the predictors for duration of MV in patients with severe CO poisoning and acute respiratory failure. This is a retrospective observational study of 796 consecutive patients diagnosed with acute CO poisoning that presented to the emergency department. Patients who received MV were divided into 2 groups: the early extubation (EE) group, consisting of patients who were on MV for less than 72 hours, and the nonearly extubation (NEE) group, consisting of patients who were on MV for more than 72 hours. Demographic and clinical data of the two groups were extracted for analysis. The intubation rate of all CO-poisoned patients was 23.4%. A total of 168 patients were enrolled in this study. The main source of CO exposure was intentional CO poisoning by charcoal burning (137 patients). Positive toxicology screening result was found in 104 patients (61.9%). The EE group had 105 patients (62.5%). On arriving at the emergency department, a high incidence of hypotension, a high white blood cell count, and elevation of blood urea nitrogen, creatinine, aspartate aminotransferase, alanine aminotransferase, creatine kinase, and troponin-I levels were statistically significant in the NEE group (P < .05). Positive toxicology screening result was statistically significant in the EE group (P < .05). In a multivariate analysis, elevation of troponin-I level was an independent factor for NEE (odds ratio, 1.305; 95% confidence interval, 1.024-1.663; P = .032). Positive toxicology screening result was an independent factor for EE (odds ratio, 0.222; 95% confidence interval, 0.101-0.489; P = .001). A positive toxin screen predicts extubation within the first 72 hours for patients with severe CO poisoning and acute respiratory failure. 
On the other hand, elevation of initial troponin-I level is a predictor for a longer duration of MV. Copyright © 2014 Elsevier Inc. All rights reserved.
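    The odds ratios and 95% confidence intervals above come from logistic regression: given a coefficient beta and its standard error se, the interval is exp(beta ± 1.96·se). As an illustration, the standard error can be back-calculated from the interval reported for troponin-I:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient and its standard error: exp(beta -/+ z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Reverse-engineer beta and se from the reported troponin-I result
# (OR 1.305, 95% CI 1.024-1.663): beta = ln(OR), and the CI half-width
# on the log scale equals z*se.
beta = math.log(1.305)
se = (math.log(1.663) - math.log(1.024)) / (2 * 1.96)
or_, lo, hi = odds_ratio_ci(beta, se)
print(f"OR = {or_:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
```

    The round trip reproduces the abstract's interval, confirming it is a standard Wald CI on the log-odds scale.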

  11. Stewards of children education: Increasing undergraduate nursing student knowledge of child sexual abuse.

    PubMed

    Taylor, L Elaine; Harris, Heather S

    2018-01-01

Child sexual abuse and exploitation constitute an increasing public health problem. Although nurses are in a unique position to identify and intervene in the lives of children suffering from abuse, due to their role in providing health care in a variety of settings, nursing curricula do not routinely include this focus. The goal was to evaluate the Stewards of Children child sexual abuse training as an educational intervention to increase the knowledge level of undergraduate nursing students on how to prevent, recognize, and react responsibly to child sexual abuse and trafficking. Undergraduate nursing students were required to take the Stewards of Children training in their last semester prior to graduation. Students in the study were given a pre-test prior to the class and a post-test following the class. Pre- and post-tests were graded and the results were compared along with an item indicating the participants' perception of the educational intervention in improving their confidence and competence in this area. Data analysis revealed that post-test scores following training were significantly improved: pre-test mean score=45.5%; post-test mean score=91.9%. The statistical significance of the improvement was marked, p<0.01, N=119. The mean response for the perceived values scale was 1.65 from a potential score of 2. This study found a statistically significant increase in the knowledge level of undergraduate nursing students on how to prevent, recognize, and react responsibly to child sexual abuse and trafficking following the Stewards of Children training. Students also reported a high level of confidence in how to prevent abuse and react skillfully when child sexual abuse had occurred. The authors concluded that Stewards of Children is an effective option to educate nursing students on this topic. Copyright © 2017. Published by Elsevier Ltd.

  12. Acculturative stress and inflammation among Chinese immigrant women.

    PubMed

    Fang, Carolyn Y; Ross, Eric A; Pathak, Harsh B; Godwin, Andrew K; Tseng, Marilyn

    2014-06-01

    Among Chinese immigrant populations, increasing duration of US residence is associated with elevated risk for various chronic diseases. Although life-style changes after migration have been extensively studied in immigrant populations, the psychosocial impact of acculturative stress on biological markers of health is less understood. Thus, the purpose of the present study is to examine associations between acculturative stress and inflammatory markers in a Chinese immigrant population. Study participants (n = 407 foreign-born Chinese American women) completed questionnaires assessing levels of stress, including acculturative stress and positive and negative life events in the previous year. Participant height and weight were measured using standard protocols, and blood samples were drawn for assessment of circulating serum levels of C-reactive protein (CRP) and soluble tumor necrosis factor receptor 2 (sTNFR2). Higher levels of acculturative stress were significantly associated with higher levels of CRP (B = 0.07, 95% confidence interval = 0.01-0.13, p = .031) and sTNFR2 (B = 0.02, 95% confidence interval = 0.004-0.03, p = .012), adjusting for age and body mass index. The latter association was no longer statistically significant when overall acculturation (i.e., identification with American culture) was included in the model. Life events were not associated with CRP or sTNFR2. This is one of the first studies to demonstrate that acculturative stress is associated with inflammatory markers in a Chinese immigrant population. Replication in other immigrant samples is needed to fully establish the biological correlates and clinical consequences of acculturative stress.

  13. Assessment of bioequivalence of rifampicin, isoniazid and pyrazinamide in a four drug fixed dose combination with separate formulations at the same dose levels.

    PubMed

    Agrawal, Shrutidevi; Kaur, Kanwal Jit; Singh, Inderjit; Bhade, Shantaram R; Kaul, Chaman Lal; Panchagnula, Ramesh

    2002-02-21

Tuberculosis (TB) needs treatment with three to five different drugs simultaneously, depending on the patient category. These drugs can be given as single drug preparations or fixed dose combinations (FDCs) of two or more drugs in a single formulation. The World Health Organization and the International Union Against Tuberculosis and Lung Disease (IUATLD) recommend only FDCs of proven bioavailability. The relative bioavailability of rifampicin (RIF), isoniazid (INH) and pyrazinamide (PYZ) was assessed in a group of 13 healthy male subjects from a four drug FDC versus separate formulations at the same dose levels. The study was designed to be an open, crossover experiment. A total of nine blood samples each of 3 ml volume were collected over a period of 24-h. The concentrations of RIF, its main metabolite desacetyl RIF (DRIF), INH and PYZ in plasma were assessed by HPLC analysis. Pharmacokinetic parameters, namely AUC(0-24), AUC(0-inf), C(max), and T(max), were calculated and subjected to different statistical tests (Hauschke analysis, two way ANOVA, normal and log transformed confidence interval) at the 90% confidence level. In addition, elimination rate constant (K(el)) and absorption efficiencies for each drug were also calculated. It was concluded that the four-drug FDC tablet is bioequivalent for RIF, INH and PYZ to the separate formulations at the same dose levels.
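    Bioequivalence at a 90% confidence interval is conventionally assessed on log-transformed pharmacokinetic parameters, with the geometric-mean ratio required to fall within 0.80-1.25. The sketch below uses invented AUC values, not the study's data, and assumes a paired design; the t critical value of 1.895 for 7 degrees of freedom is part of that illustrative assumption:

```python
import math
from statistics import mean, stdev

def ratio_90ci(test_auc, ref_auc, t_crit):
    """Geometric-mean ratio of test/reference AUC with a 90% CI on the
    log scale (paired crossover design); t_crit is the two-sided 90%
    critical value for n - 1 degrees of freedom."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    n = len(diffs)
    d, s = mean(diffs), stdev(diffs)
    half = t_crit * s / math.sqrt(n)
    return math.exp(d), math.exp(d - half), math.exp(d + half)

# Hypothetical AUC(0-24) values for 8 subjects: FDC vs. separate
# formulations of the same drug at the same dose.
fdc = [52.1, 47.9, 60.3, 44.8, 55.6, 50.2, 58.7, 49.5]
sep = [50.8, 49.5, 58.1, 46.0, 54.2, 52.3, 56.9, 50.7]
gmr, lo, hi = ratio_90ci(fdc, sep, t_crit=1.895)
bioequivalent = 0.80 <= lo and hi <= 1.25
print(f"GMR = {gmr:.3f}, 90% CI {lo:.3f}-{hi:.3f}, BE: {bioequivalent}")
```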

  14. The association between socioeconomic status and autism diagnosis in the United Kingdom for children aged 5-8 years of age: Findings from the Born in Bradford cohort.

    PubMed

    Kelly, Brian; Williams, Stefan; Collins, Sylvie; Mushtaq, Faisal; Mon-Williams, Mark; Wright, Barry; Mason, Dan; Wright, John

    2017-11-01

There has been recent interest in the relationship between socioeconomic status and the diagnosis of autism in children. Studies in the United States have found lower rates of autism diagnosis associated with lower socioeconomic status, while studies in other countries report no association, or the opposite. This article aims to contribute to the understanding of this relationship in the United Kingdom. Using data from the Born in Bradford cohort, comprising 13,857 children born between 2007 and 2011, it was found that children of mothers educated to A-level or above had twice the rate of autism diagnosis: 1.5% of children (95% confidence interval: 1.1%, 1.9%) compared with 0.7% (95% confidence interval: 0.5%, 0.9%) for children of mothers with lower levels of education. No statistically significant relationship with income status or neighbourhood material deprivation was found after controlling for mothers' education status. The results suggest a substantial level of underdiagnosis for children of lower education status mothers, though further research is required to determine the extent to which this is replicated across the United Kingdom. Tackling inequalities in autism diagnosis will require action, which could include increased education, awareness, further exploration of the usefulness of screening programmes and the provision of more accessible support services.
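    The diagnosis-rate intervals quoted above have the form of a Wald confidence interval for a binomial proportion. A minimal sketch with hypothetical counts (the stratum sizes behind the reported percentages are not given here):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Wald 95% confidence interval for a proportion; adequate when
    n*p and n*(1-p) are both reasonably large."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical: 53 diagnosed children out of 3,500 in one education stratum.
p, lo, hi = wald_ci(53, 3500)
print(f"{p:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```

    For proportions this close to zero, a Wilson or exact interval would be a more robust choice, but the Wald form matches the "estimate (lower, upper)" style used in the abstract.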

  15. Statistical Significance vs. Practical Significance: An Exploration through Health Education

    ERIC Educational Resources Information Center

    Rosen, Brittany L.; DeMaria, Andrea L.

    2012-01-01

    The purpose of this paper is to examine the differences between statistical and practical significance, including strengths and criticisms of both methods, as well as provide information surrounding the application of various effect sizes and confidence intervals within health education research. Provided are recommendations, explanations and…

  16. Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations

    ERIC Educational Resources Information Center

    Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon

    2018-01-01

    To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…

  17. Prospective Teachers' Problem Solving Skills and Self-Confidence Levels

    ERIC Educational Resources Information Center

    Gursen Otacioglu, Sena

    2008-01-01

    The basic objective of the research is to determine whether the education that prospective teachers in different fields receive is related to their levels of problem solving skills and self-confidence. Within the mentioned framework, the prospective teachers' problem solving and self-confidence levels have been examined under several variables.…

  18. The Global Error Assessment (GEA) model for the selection of differentially expressed genes in microarray data.

    PubMed

    Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan

    2004-11-01

Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Genes of similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates an increased stability, robustness and confidence in gene selection. A subset of the selected genes was validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. 
The GEA code for R software is freely available upon request to authors.
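    The core GEA idea, binning genes by absolute expression and pooling the replicate variance locally within each bin, can be sketched as follows. This is an illustrative reconstruction of the binning step only, not the authors' published code:

```python
from statistics import mean, pvariance

def binned_error(genes, bin_size):
    """GEA-style local error estimate: sort genes by mean expression,
    group them into bins, and pool the within-gene replicate variance
    in each bin. Returns {gene_id: pooled variance of its bin}."""
    order = sorted(genes, key=lambda g: mean(genes[g]))
    pooled = {}
    for start in range(0, len(order), bin_size):
        bin_ids = order[start:start + bin_size]
        v = mean(pvariance(genes[g]) for g in bin_ids)
        for g in bin_ids:
            pooled[g] = v
    return pooled

# Hypothetical replicate expression values (k = 3) for six genes.
genes = {
    "g1": [10.1, 10.4, 9.9],
    "g2": [10.6, 10.2, 10.3],
    "g3": [55.0, 57.1, 54.2],
    "g4": [56.3, 55.8, 57.0],
    "g5": [200.5, 208.1, 203.9],
    "g6": [210.2, 205.7, 207.3],
}
err = binned_error(genes, bin_size=2)
# Genes in the same expression bin share one pooled error estimate.
print(err["g1"] == err["g2"], err["g5"] == err["g6"])
```

    The pooled estimate borrows strength from neighbouring genes, which is what gives the test its power when k is low.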

  19. Associations of Lipoprotein(a) Levels With Incident Atrial Fibrillation and Ischemic Stroke: The ARIC (Atherosclerosis Risk in Communities) Study.

    PubMed

    Aronis, Konstantinos N; Zhao, Di; Hoogeveen, Ron C; Alonso, Alvaro; Ballantyne, Christie M; Guallar, Eliseo; Jones, Steven R; Martin, Seth S; Nazarian, Saman; Steffen, Brian T; Virani, Salim S; Michos, Erin D

    2017-12-15

Lipoprotein(a) (Lp[a]) is proatherosclerotic and prothrombotic, causally related to coronary disease, and associated with other cardiovascular diseases. The association of Lp(a) with incident atrial fibrillation (AF) and with ischemic stroke among individuals with AF remains to be elucidated. In the community-based ARIC (Atherosclerosis Risk in Communities) study cohort, Lp(a) levels were measured by a Denka Seiken assay at visit 4 (1996-1998). We used multivariable-adjusted Cox models to compare AF and ischemic stroke risk across Lp(a) levels. First, we evaluated incident AF in 9,908 participants free of AF at baseline. AF was ascertained by electrocardiography at study visits, hospital International Statistical Classification of Diseases, 9th Revision (ICD-9) codes, and death certificates. We then evaluated incident ischemic stroke in 10,127 participants free of stroke at baseline. Stroke was identified by annual phone calls, hospital ICD-9 codes, and death certificates. The baseline age was 62.7±5.6 years. Median Lp(a) levels were 13.3 mg/dL (interquartile range, 5.2-39.7 mg/dL). Median follow-up was 13.9 and 15.8 years for AF and stroke, respectively. Lp(a) was not associated with incident AF (hazard ratio, 0.98; 95% confidence interval, 0.82-1.17), comparing those with Lp(a) ≥50 mg/dL with those with Lp(a) <10 mg/dL. High Lp(a) was associated with a 42% relative increase in stroke risk among participants without AF (hazard ratio, 1.42; 95% confidence interval, 1.07-1.90) but not in those with AF (hazard ratio, 1.06; 95% confidence interval, 0.70-1.61 [P interaction for AF=0.25]). There were no interactions by race or sex. No association was found for cardioembolic stroke subtype. High Lp(a) levels were not associated with incident AF. Lp(a) levels were associated with increased ischemic stroke risk, primarily among individuals without AF but not in those with AF. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  20. Change in ozone trends at southern high latitudes

    NASA Technical Reports Server (NTRS)

    Yang, E.-S.; Cunnold, D. M.; Newchurch, M. J.; Salawitch, R. J.

    2005-01-01

    Long-term ozone variations at 60°-70°S in spring are investigated using ground-based and satellite measurements. A strong positive correlation is shown between year-to-year variations of ozone and temperature in the Antarctic collar region in September and October. Based on this relationship, the effect of year-to-year variations in vortex dynamics has been filtered out. This process results in an ozone time series that shows increasing springtime ozone losses over the Antarctic until the mid-1990s. Since approximately 1997 the ozone losses have leveled off. The analysis confirms that this change is consistent across all instruments and is statistically significant at the 95% confidence level. This analysis quantifies the beginning of the recovery of the ozone hole, which is expected from the leveling off of stratospheric halogen loading due to the ban on CFCs and other halocarbons initiated by the Montreal Protocol.
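
    The filter-then-test procedure described above can be sketched with synthetic data; the break year, trend slopes, temperature sensitivity, and noise levels below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1979, 2004)

# Synthetic springtime collar-region series (illustration only): ozone
# declines until ~1997 and then levels off, with temperature-driven
# year-to-year variability on top.
temp_anom = rng.normal(size=years.size)
forced = np.where(years < 1997, -5.0 * (years - 1979), -5.0 * (1997 - 1979))
ozone = 300.0 + forced + 8.0 * temp_anom + rng.normal(scale=3.0, size=years.size)

# Step 1: remove the dynamically driven (temperature-correlated) variability,
# fitting temperature jointly with a linear trend so the two are not confounded.
X = np.column_stack([np.ones(years.size), years - years.mean(), temp_anom])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
filtered = ozone - coef[2] * temp_anom

def ols_slope(x, y):
    """Least-squares slope and its standard error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc = x - x.mean()
    b = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
    resid = y - y.mean() - b * xc
    s2 = (resid ** 2).sum() / (x.size - 2)
    return b, float(np.sqrt(s2 / (xc ** 2).sum()))

# Step 2: compare trends in the filtered series before and after the
# candidate break year; |z| > 1.96 marks a slope change significant
# at the 95% confidence level.
pre, post = years < 1997, years >= 1997
b_pre, se_pre = ols_slope(years[pre], filtered[pre])
b_post, se_post = ols_slope(years[post], filtered[post])
z = (b_post - b_pre) / np.sqrt(se_pre ** 2 + se_post ** 2)
```

With the synthetic decline built in, the pre-1997 slope is recovered as strongly negative and the slope change is significant, mirroring the leveling-off test described in the abstract.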

  1. Innovations in curriculum design: A multi-disciplinary approach to teaching statistics to undergraduate medical students

    PubMed Central

    Freeman, Jenny V; Collier, Steve; Staniforth, David; Smith, Kevin J

    2008-01-01

    Background Statistics is relevant to students and practitioners in medicine and health sciences and is increasingly taught as part of the medical curriculum. However, it is common for students to dislike and under-perform in statistics. We sought to address these issues by redesigning the way that statistics is taught. Methods The project brought together a statistician, clinician and educational experts to re-conceptualize the syllabus, and focused on developing different methods of delivery. New teaching materials, including videos, animations and contextualized workbooks were designed and produced, placing greater emphasis on applying statistics and interpreting data. Results Two cohorts of students were evaluated, one with old style and one with new style teaching. Both were similar with respect to age, gender and previous level of statistics. Students who were taught using the new approach could better define the key concepts of p-value and confidence interval (p < 0.001 for both). They were more likely to regard statistics as integral to medical practice (p = 0.03), and to expect to use it in their medical career (p = 0.003). There was no significant difference in the numbers who thought that statistics was essential to understand the literature (p = 0.28) and those who felt comfortable with the basics of statistics (p = 0.06). More than half the students in both cohorts felt that they were comfortable with the basics of medical statistics. Conclusion Using a variety of media, and placing emphasis on interpretation can help make teaching, learning and understanding of statistics more people-centred and relevant, resulting in better outcomes for students. PMID:18452599

  2. Evaluation of the statistical evidence for Characteristic Earthquakes in the frequency-magnitude distributions of Sumatra and other subduction zone regions

    NASA Astrophysics Data System (ADS)

    Naylor, M.; Main, I. G.; Greenhough, J.; Bell, A. F.; McCloskey, J.

    2009-04-01

    The Sumatran Boxing Day earthquake and subsequent large events provide an opportunity to re-evaluate the statistical evidence for characteristic earthquake events in frequency-magnitude distributions. Our aims are to (i) improve intuition regarding the properties of samples drawn from power laws, (ii) illustrate using random samples how appropriate Poisson confidence intervals can both aid the eye and provide an appropriate statistical evaluation of data drawn from power-law distributions, and (iii) apply these confidence intervals to test for evidence of characteristic earthquakes in subduction-zone frequency-magnitude distributions. We find no need for a characteristic model to describe frequency-magnitude distributions in any of the investigated subduction zones, including Sumatra, owing to an emergent skew in the residuals of power-law count data at high magnitudes combined with a sample bias toward examining large earthquakes as candidate characteristic events.
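
    The Poisson confidence intervals of aim (ii) can be sketched as follows; the Gutenberg-Richter parameters and the simulated bin counts are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.stats import chi2

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson count k,
    built from chi-squared quantiles."""
    alpha = 1.0 - conf
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

# Hypothetical Gutenberg-Richter expectation: log10(N) = a - b * M
a, b = 4.0, 1.0
mag_bins = np.arange(4.0, 7.5, 0.5)
expected = 10 ** (a - b * mag_bins)

# Simulated "observed" counts drawn from the pure power-law model
rng = np.random.default_rng(42)
observed = rng.poisson(expected)

# A bin is flagged as anomalous only if the model expectation falls
# outside the Poisson interval implied by the observed count.
for m, n_exp, n_obs in zip(mag_bins, expected, observed):
    lo, hi = poisson_ci(int(n_obs))
    flag = "" if lo <= n_exp <= hi else "  <- outside 95% interval"
    print(f"M >= {m:.1f}: expected {n_exp:8.1f}, observed {int(n_obs):5d}{flag}")
```

Because the counts are Poisson-distributed around the power law, the sparsely populated high-magnitude bins scatter widely yet still fall within their (asymmetric) intervals, which is the intuition the paper uses against a characteristic-earthquake interpretation.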

  3. Effect of CALIPSO Cloud Aerosol Discrimination (CAD) Confidence Levels on Observations of Aerosol Properties near Clouds

    NASA Technical Reports Server (NTRS)

    Yang, Weidong; Marshak, Alexander; Varnai, Tamas; Liu, Zhaoyan

    2012-01-01

    CALIPSO aerosol backscatter enhancement in the transition zone between clouds and clear-sky areas is revisited with particular attention to the effects of data selection based on the confidence level of cloud-aerosol discrimination (CAD). The results show that backscatter behavior in the transition zone strongly depends on the CAD confidence level. Data with a higher confidence level show flatter backscatter far from clouds and a much sharper increase near clouds (within 4 km), and thus a smaller transition zone. For high-confidence-level data, it is shown that the overall backscatter enhancement is more pronounced for small clear-air segments and horizontally larger clouds. The results suggest that data selection based on CAD reduces the possible effects of cloud contamination when studying aerosol properties in the vicinity of clouds.

  4. A Statistical Tool for Risk Assessment as Function of Number of Retrieved Lymph Nodes from Rectal Cancer Patients.

    PubMed

    Wu, Zhenyu; Qin, Guoyou; Zhao, Naiqing; Jia, Huixun; Zheng, Xueying

    2018-05-16

    Although a minimum of 12 lymph nodes (LNs) has been recommended for colorectal cancer, there remains considerable debate for rectal cancer patients. Inadequacy of examined LNs would lead to under-staging and, as a consequence, inappropriate treatment. We describe a statistical tool that allows estimation of the probability of false-negative nodes. A total of 26,778 rectal adenocarcinoma patients with tumour stage (T stage) 1-3, diagnosed between 2004 and 2013, who did not receive neoadjuvant therapies and had at least one histologically assessed LN, were extracted from the Surveillance, Epidemiology and End Results (SEER) database. A statistical tool using the beta-binomial distribution was developed to estimate the probability that a patient with occult nodal disease is misclassified as node-negative, as a function of the total number of LNs examined and T stage. The probability of falsely identifying a patient as node-negative decreased with an increasing number of nodes examined for each stage. It was estimated to be 72%, 66% and 52% for T1, T2 and T3 patients, respectively, with a single node examined. To rule out occult nodal disease with 90% confidence, 5, 9, and 29 nodes need to be examined for patients from stages T1, T2, and T3, respectively. The false-negative rate of the examined lymph nodes in rectal cancer was verified to depend preoperatively on the clinical tumour stage. A more accurate nodal staging score was developed to recommend a threshold for the minimum number of examined nodes according to the desired level of confidence. This article is protected by copyright. All rights reserved.
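
    A minimal sketch of the beta-binomial idea is shown below; the shape parameters are hypothetical placeholders, not the stage-specific values fitted to the SEER data.

```python
from math import exp, lgamma

def prob_all_negative(n, alpha, beta):
    """P(all n examined nodes are negative) for a patient who truly has
    occult nodal disease, under a beta-binomial model: the per-node
    positivity probability p is drawn from Beta(alpha, beta), so
    P(0 positives in n) = B(alpha, beta + n) / B(alpha, beta)."""
    log_p = (lgamma(beta + n) + lgamma(alpha + beta)
             - lgamma(beta) - lgamma(alpha + beta + n))
    return exp(log_p)

def nodes_for_confidence(conf, alpha, beta, n_max=500):
    """Smallest number of examined nodes at which the chance of missing
    occult disease drops below 1 - conf."""
    for n in range(1, n_max + 1):
        if prob_all_negative(n, alpha, beta) <= 1.0 - conf:
            return n
    return None

# Illustration with hypothetical shape parameters (not SEER-fitted values)
print(prob_all_negative(1, 0.6, 1.5))   # -> 0.714... (= beta / (alpha + beta))
print(nodes_for_confidence(0.90, 0.6, 1.5))
```

The threshold-setting logic mirrors the abstract: the miss probability falls monotonically as more nodes are examined, and a desired confidence level translates into a minimum node count.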

  5. The attitudinal and cognitive effects of interdisciplinary collaboration on elementary pre-service teachers development of biological science related lesson plans

    NASA Astrophysics Data System (ADS)

    Mills, Jada Jamerson

    There is a need for STEM (science, technology, engineering, and mathematics) education to be taught effectively in elementary schools. To achieve this, teacher preparation programs should graduate confident, content-strong teachers who can convey knowledge to elementary students. This study used interdisciplinary collaboration between the School of Education and the College of Liberal Arts through a learning-by-teaching method (LdL: Lernen durch Lehren in German). It examined pre-service teacher (PST) achievement levels in understanding science concepts based on pretest and posttest data, the quality of the lesson plans developed, and enjoyment of the class based on the collaboration with science students. The PSTs enrolled in two treatment sections of EDEL 404: Science in the Elementary Classroom collaborated with science students enrolled in BISC 327: Introductory Neuroscience to enhance their science skills and create case-based lesson plans on neuroethology topics: echolocation, electrosensory reception, steroid hormones, and vocal learning. The PSTs enrolled in the single control section of EDEL 404 collaborated with fellow elementary education majors to develop lesson plans based on the same selected topics. Qualitative interviews of education faculty, science faculty, and PSTs provided depth to the quantitative findings. Upon lesson plan completion, in-service teachers graded the two best and two worst plans from the treatment and control sections, and a science reviewer graded the plans for scientific accuracy. Statistical analyses were conducted for each hypothesis; one significant finding was that PSTs who collaborated with science students had more positive attitudes toward science lesson plan writing than those who did not. Although the remaining statistical analyses were not significant, all PSTs reported greater confidence after the collaboration. Additionally, interviews provided meaning and understanding to the nonsignificant statistical results, as well as to the scientific accuracy of the lesson plans.

  6. Trends in Mortality After Primary Cytoreductive Surgery for Ovarian Cancer: A Systematic Review and Metaregression of Randomized Clinical Trials and Observational Studies.

    PubMed

    Di Donato, Violante; Kontopantelis, Evangelos; Aletti, Giovanni; Casorelli, Assunta; Piacenti, Ilaria; Bogani, Giorgio; Lecce, Francesca; Benedetti Panici, Pierluigi

    2017-06-01

    Primary cytoreductive surgery (PDS) followed by platinum-based chemotherapy is the cornerstone of treatment, and the absence of residual tumor after PDS is universally considered the most important prognostic factor. The aim of the present analysis was to evaluate trends in and predictors of 30-day mortality in patients undergoing primary cytoreduction for ovarian cancer. The literature was searched for records reporting 30-day mortality after PDS. All cohorts were rated for quality. Simple and multiple Poisson regression models were used to quantify the association between 30-day mortality and the following: overall or severe complications, proportion of patients with stage IV disease, median age, year of publication, and weighted surgical complexity index. Using the multiple regression model, we calculated the risk of perioperative mortality at different levels for statistically significant covariates of interest. Simple regression identified median age and proportion of patients with stage IV disease as statistically significant predictors of 30-day mortality. When included in the multiple Poisson regression model, both remained statistically significant, with an incidence rate ratio of 1.087 for median age and 1.017 for stage IV disease. Disease stage was a strong predictor, with the risk estimated to increase from 2.8% (95% confidence interval 2.02-3.66) for stage III to 16.1% (95% confidence interval 6.18-25.93) for stage IV, for a cohort with a median age of 65 years. Metaregression demonstrated that increased age and advanced clinical stage were independently associated with an increased risk of mortality, and the combined effects of both factors greatly increased the risk.

  7. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    ERIC Educational Resources Information Center

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2004-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…

  8. Confidence limits for contribution plots in multivariate statistical process control using bootstrap estimates.

    PubMed

    Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund

    2016-02-18

    In multivariate statistical process control (MSPC), when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by or might be the cause of the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under normal operating conditions, where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving resampling methods as the only option. We suggest bootstrap resampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
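
    The bootstrap strategy can be illustrated with a toy example; the "contribution" statistic below is a simple squared deviation rather than the PCA-based contributions used in the paper, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical NOC (normal operating conditions) training data:
# 100 historical runs of 5 process variables with different spreads.
noc = rng.normal(size=(100, 5)) * np.array([1.0, 0.5, 2.0, 1.0, 0.8])

def contribution(x, center):
    """Toy per-variable contribution: squared deviation from the NOC center."""
    return (x - center) ** 2

def bootstrap_limits(X, n_boot=2000, conf=0.95):
    """Percentile-bootstrap upper confidence limits for a single run's
    per-variable contribution under NOC: resample historical runs with
    replacement, take the conf-quantile of contributions each time, and
    average the bootstrap replicates."""
    n, p = X.shape
    q = np.empty((n_boot, p))
    for b in range(n_boot):
        sample = X[rng.integers(0, n, size=n)]
        q[b] = np.percentile(contribution(sample, sample.mean(axis=0)),
                             100 * conf, axis=0)
    return q.mean(axis=0)

limits = bootstrap_limits(noc)

# A new run is judged variable by variable against the NOC limits;
# here variable index 1 deviates far beyond its (small) natural spread.
new_run = np.array([0.5, 3.0, 1.0, -0.4, 0.2])
flagged = contribution(new_run, noc.mean(axis=0)) > limits
```

Note that the limits scale with each variable's natural variation, so a moderate absolute deviation on a noisy variable is not flagged while the same deviation on a quiet variable is, which is the point of CL-based contribution plots.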

  9. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  10. Family presence during resuscitation in a paediatric hospital: health professionals' confidence and perceptions.

    PubMed

    McLean, Julie; Gill, Fenella J; Shields, Linda

    2016-04-01

    To investigate medical and nursing staff's perceptions of and self-confidence in facilitating family presence during resuscitation in a paediatric hospital setting. Family presence during resuscitation is the attendance of family members in a location that affords visual or physical contact with the patient during resuscitation. Providing the opportunity for families to be present during resuscitation embraces the family-centred care philosophy which underpins paediatric care. Having families present continues to spark much debate amongst health care professionals. A descriptive cross-sectional randomised survey using the 'Family Presence Risk/Benefit Scale' and the 'Family Presence Self-Confidence Scale' to assess health care professionals' (doctors and nurses) perceptions and self-confidence in facilitating family presence during resuscitation of a child in a paediatric hospital. Surveys were distributed to 300 randomly selected medical and nursing staff. Descriptive and inferential statistics were used to compare medical and nursing, and critical and noncritical care perceptions and self-confidence. Critical care staff had statistically significant higher risk/benefit scores and higher self-confidence scores than those working in noncritical care areas. Having experience in paediatric resuscitation, having invited families to be present previously and a greater number of years working in paediatrics significantly affected participants' perceptions and self-confidence. There was no difference between medical and nursing mean scores for either scale. Both medical and nursing staff working in the paediatric setting understood the needs of families and the philosophy of family-centred care is a model of care practised across disciplines. This has implications both for implementing guidelines to support family presence during resuscitation and for education strategies to shift the attitudes of staff who have limited or no experience. © 2016 John Wiley & Sons Ltd.

  11. Bayesian Correlation Analysis for Sequence Count Data

    PubMed Central

    Lau, Nelson; Perkins, Theodore J.

    2016-01-01

    Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities’ measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low—especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities’ signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset. PMID:27701449
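
    The core idea, that posterior uncertainty should shrink correlations for low-count entities, can be sketched with a conjugate Gamma-Poisson model; the prior, the depth handling, and the function name here are illustrative assumptions, not the formulation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bayes_corr(x, y, depth=1.0, a0=1.0, b0=1.0, n_draws=4000):
    """Posterior-mean correlation of two entities' latent log-rates.

    Counts are modelled as Poisson(depth * rate) with a Gamma(a0, b0) prior
    on each rate, so each posterior is Gamma(a0 + count, b0 + depth).
    Averaging the Pearson correlation over posterior draws suppresses it
    when counts (and hence measurement confidence) are low."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    rates_x = rng.gamma(a0 + x, 1.0 / (b0 + depth), size=(n_draws, x.size))
    rates_y = rng.gamma(a0 + y, 1.0 / (b0 + depth), size=(n_draws, y.size))
    draws = [np.corrcoef(np.log(rx), np.log(ry))[0, 1]
             for rx, ry in zip(rates_x, rates_y)]
    return float(np.mean(draws))

# High counts: confident measurements, the correlation survives.
high = bayes_corr([100, 200, 400, 800], [110, 190, 420, 790])
# Same proportional pattern at low counts: posterior noise shrinks it.
low = bayes_corr([1, 2, 4, 8], [1, 2, 4, 8])
```

This reproduces the qualitative behaviour described in the abstract: identical expression patterns yield a weaker Bayesian correlation when the underlying counts are small.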

  12. Magnitude of flood flows for selected annual exceedance probabilities in Rhode Island through 2010

    USGS Publications Warehouse

    Zarriello, Phillip J.; Ahearn, Elizabeth A.; Levin, Sara B.

    2012-01-01

    Heavy persistent rains from late February through March 2010 caused severe widespread flooding in Rhode Island that set or nearly set record flows and water levels at many long-term streamgages in the State. In response, the U.S. Geological Survey, in partnership with the Federal Emergency Management Agency, conducted a study to update estimates of flood magnitudes at streamgages and regional equations for estimating flood flows at ungaged locations. This report provides information needed for flood plain management, transportation infrastructure design, flood insurance studies, and other purposes that can help minimize future flood damages and risks. The magnitudes of floods were determined from the annual peak flows at 43 streamgages in Rhode Island (20 sites), Connecticut (14 sites), and Massachusetts (9 sites) using the standard Bulletin 17B log-Pearson type III method and a modification of this method called the expected moments algorithm (EMA) for 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability (AEP) floods. Annual-peak flows were analyzed for the period of record through the 2010 water year; however, records were extended at 23 streamgages using the maintenance of variance extension (MOVE) procedure to best represent the longest period possible for determining the generalized skew and flood magnitudes. Generalized least square regression equations were developed from the flood quantiles computed at 41 streamgages (2 streamgages in Rhode Island with reported flood quantiles were not used in the regional regression because of regulation or redundancy) and their respective basin characteristics to estimate magnitude of floods at ungaged sites. Of 55 basin characteristics evaluated as potential explanatory variables, 3 were statistically significant—drainage area, stream density, and basin storage. 
The pseudo-coefficient of determination (pseudo-R²) indicates these three explanatory variables explain 95 to 96 percent of the variance in the flood magnitudes from 20- to 0.2-percent AEPs. Estimates of uncertainty of the at-site and regression flood magnitudes are provided and were combined with their respective estimated flood quantiles to improve estimates of flood flows at streamgages. This region has a long history of urban development, which is considered to have an important effect on flood flows. This study includes basins that have an impervious area ranging from 0.5 to 37 percent. Although imperviousness provided some explanatory power in the regression, it was not statistically significant at the 95-percent confidence level for any of the AEPs examined. Influence of urbanization on flood flows indicates a complex interaction with other characteristics that confounds a statistical explanation of its effects. Standard methods for calculating magnitude of floods for a given AEP are based on the assumption of stationarity, that is, the annual peak flows exhibit no significant trend over time. A subset of 16 streamgages with 70 or more years of unregulated systematic record indicates all but 4 streamgages have a statistically significant positive trend at the 95-percent confidence level; three of the remaining four are statistically significant at about the 90-percent confidence level or above. If the trend continues linearly in time, the estimated magnitude of floods for any AEP, on average, will increase by 6, 13, and 21 percent in 10, 20, and 30 years' time, respectively. In 2010, new peaks of record were set at 18 of the 21 active streamgages in Rhode Island. The updated flood frequency analysis indicates the peaks at these streamgages ranged from 2- to 0.2-percent AEP. Many streamgages in the State peaked at a 0.5- and 0.2-percent AEP, except for streamgages in the Blackstone River Basin, which peaked from a 4- to 2-percent AEP.

  13. A method for meta-analysis of epidemiological studies.

    PubMed

    Einarson, T R; Leeder, J S; Koren, G

    1988-10-01

    This article presents a stepwise approach for conducting a meta-analysis of epidemiological studies based on proposed guidelines. This systematic method is recommended for practitioners evaluating epidemiological studies in the literature to arrive at an overall quantitative estimate of the impact of a treatment. Bendectin is used as an illustrative example. Meta-analysts should establish a priori the purpose of the analysis and a complete protocol. This protocol should be adhered to, and all steps performed should be recorded in detail. To aid in developing such a protocol, we present methods the researcher can use to perform each of 22 steps in six major areas. The illustrative meta-analysis confirmed previous traditional narrative literature reviews that Bendectin is not related to teratogenic outcomes in humans. The overall summary odds ratio was 1.01 (χ² = 0.05, p = 0.815) with a 95 percent confidence interval of 0.66-1.55. When the studies were separated according to study type, the summary odds ratio for cohort studies was 0.95 with a 95 percent confidence interval of 0.62-1.45. For case-control studies, the summary odds ratio was 1.27 with a 95 percent confidence interval of 0.83-1.94. The corresponding chi-square values were not statistically significant at the p = 0.05 level.
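
    The pooling of study-level odds ratios into a summary estimate with a 95 percent confidence interval can be sketched as follows; this is a standard fixed-effect (inverse-variance) sketch, and the 2x2 tables are hypothetical, not the Bendectin data.

```python
from math import exp, log, sqrt

def pooled_odds_ratio(tables, z=1.96):
    """Fixed-effect (inverse-variance) summary odds ratio with ~95% CI.

    Each table is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls. Woolf's variance of the
    log odds ratio is 1/a + 1/b + 1/c + 1/d."""
    weighted_sum = total_weight = 0.0
    for a, b, c, d in tables:
        log_or = log((a * d) / (b * c))
        weight = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)
        weighted_sum += weight * log_or
        total_weight += weight
    pooled = weighted_sum / total_weight
    se = sqrt(1.0 / total_weight)
    return exp(pooled), exp(pooled - z * se), exp(pooled + z * se)

# Hypothetical study tables (illustration only)
or_hat, lo, hi = pooled_odds_ratio([(12, 88, 10, 90),
                                    (30, 170, 28, 172),
                                    (8, 92, 9, 91)])
```

Pooling on the log scale with inverse-variance weights gives larger studies proportionally more influence, and exponentiating the limits returns the CI to the odds-ratio scale.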

  14. Promoting Best Practices for Managing Acute Low Back Pain in an Occupational Environment.

    PubMed

    Slaughter, Amanda Lynn; Frith, Karen; O'Keefe, Louise; Alexander, Susan; Stoll, Regina

    2015-09-01

    Providers treating low back pain must be confident and knowledgeable in evidence-based practice (EBP) to provide the best outcomes. An online education course was created in an effort to increase knowledge and confidence in EBP and clinical practice guidelines specific to low back pain in an occupational setting. There were 80 participants who completed the pre-test and post-test. The results showed a statistically significant improvement in knowledge and confidence scores after completion of the course. An online education course was shown to be a cost-effective, accessible tool to increase knowledge and confidence of EBP for different health care providers. © 2015 The Author(s).

  15. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

    PubMed

    Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

    2015-01-01

    This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the relationship between the achievement source of confidence and performance. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who construed confidence from their own achievements and reported multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

  16. Rasch analyses of the Activities-specific Balance Confidence Scale with individuals 50 years and older with lower limb amputations

    PubMed Central

    Sakakibara, Brodie M.; Miller, William C.; Backman, Catherine L.

    2012-01-01

    Objective To explore shortened response formats for use with the Activities-specific Balance Confidence scale and then: 1) evaluate the unidimensionality of the scale; 2) evaluate the item difficulty; 3) evaluate the scale for redundancy and content gaps; and 4) evaluate the item standard error of measurement (SEM) and internal consistency reliability among aging individuals (≥50 years) with a lower-limb amputation living in the community. Design Secondary analysis of cross-sectional survey and chart review data. Setting Out-patient amputee clinics, Ontario, Canada. Participants Four hundred forty-eight community living adults, at least 50 years old (mean = 68 years), who have used a prosthesis for at least 6 months for a major unilateral lower limb amputation. Three hundred twenty-five (72.5%) were men. Intervention N/A Main Outcome Measure(s) Activities-specific Balance Confidence Scale. Results A 5-option response format outperformed 4- and 6-option formats. Factor analyses confirmed a unidimensional scale. The distance between response options is not the same for all items on the scale, evident by the Partial Credit Model (PCM) having a better fit to the data than the Rating Scale Model. Two items, however, did not fit the PCM within statistical reason. Revising the wording of the two items may resolve the misfit, improve the construct validity, and lower the SEM. Overall, the difficulty of the scale’s items is appropriate for use with aging individuals with lower-limb amputation, and the scale is most reliable (Cronbach α = 0.94) for use with individuals with moderately low balance confidence levels. Conclusions The ABC-scale with a simplified 5-option response format is a valid and reliable measure of balance confidence for use with individuals aging with a lower limb amputation. PMID:21704978

  17. First measurement of muon-neutrino disappearance in NOvA

    NASA Astrophysics Data System (ADS)

    Adamson, P.; Ader, C.; Andrews, M.; Anfimov, N.; Anghel, I.; Arms, K.; Arrieta-Diaz, E.; Aurisano, A.; Ayres, D. S.; Backhouse, C.; Baird, M.; Bambah, B. A.; Bays, K.; Bernstein, R.; Betancourt, M.; Bhatnagar, V.; Bhuyan, B.; Bian, J.; Biery, K.; Blackburn, T.; Bocean, V.; Bogert, D.; Bolshakova, A.; Bowden, M.; Bower, C.; Broemmelsiek, D.; Bromberg, C.; Brunetti, G.; Bu, X.; Butkevich, A.; Capista, D.; Catano-Mur, E.; Chase, T. R.; Childress, S.; Choudhary, B. C.; Chowdhury, B.; Coan, T. E.; Coelho, J. A. B.; Colo, M.; Cooper, J.; Corwin, L.; Cronin-Hennessy, D.; Cunningham, A.; Davies, G. S.; Davies, J. P.; Del Tutto, M.; Derwent, P. F.; Deepthi, K. N.; Demuth, D.; Desai, S.; Deuerling, G.; Devan, A.; Dey, J.; Dharmapalan, R.; Ding, P.; Dixon, S.; Djurcic, Z.; Dukes, E. C.; Duyang, H.; Ehrlich, R.; Feldman, G. J.; Felt, N.; Fenyves, E. J.; Flumerfelt, E.; Foulkes, S.; Frank, M. J.; Freeman, W.; Gabrielyan, M.; Gallagher, H. R.; Gebhard, M.; Ghosh, T.; Gilbert, W.; Giri, A.; Goadhouse, S.; Gomes, R. A.; Goodenough, L.; Goodman, M. C.; Grichine, V.; Grossman, N.; Group, R.; Grudzinski, J.; Guarino, V.; Guo, B.; Habig, A.; Handler, T.; Hartnell, J.; Hatcher, R.; Hatzikoutelis, A.; Heller, K.; Howcroft, C.; Huang, J.; Huang, X.; Hylen, J.; Ishitsuka, M.; Jediny, F.; Jensen, C.; Jensen, D.; Johnson, C.; Jostlein, H.; Kafka, G. K.; Kamyshkov, Y.; Kasahara, S. M. S.; Kasetti, S.; Kephart, K.; Koizumi, G.; Kotelnikov, S.; Kourbanis, I.; Krahn, Z.; Kravtsov, V.; Kreymer, A.; Kulenberg, Ch.; Kumar, A.; Kutnink, T.; Kwarciancy, R.; Kwong, J.; Lang, K.; Lee, A.; Lee, W. M.; Lee, K.; Lein, S.; Liu, J.; Lokajicek, M.; Lozier, J.; Lu, Q.; Lucas, P.; Luchuk, S.; Lukens, P.; Lukhanin, G.; Magill, S.; Maan, K.; Mann, W. A.; Marshak, M. L.; Martens, M.; Martincik, J.; Mason, P.; Matera, K.; Mathis, M.; Matveev, V.; Mayer, N.; McCluskey, E.; Mehdiyev, R.; Merritt, H.; Messier, M. D.; Meyer, H.; Miao, T.; Michael, D.; Mikheyev, S. P.; Miller, W. H.; Mishra, S. 
R.; Mohanta, R.; Moren, A.; Mualem, L.; Muether, M.; Mufson, S.; Musser, J.; Newman, H. B.; Nelson, J. K.; Niner, E.; Norman, A.; Nowak, J.; Oksuzian, Y.; Olshevskiy, A.; Oliver, J.; Olson, T.; Paley, J.; Pandey, P.; Para, A.; Patterson, R. B.; Pawloski, G.; Pearson, N.; Perevalov, D.; Pershey, D.; Peterson, E.; Petti, R.; Phan-Budd, S.; Piccoli, L.; Pla-Dalmau, A.; Plunkett, R. K.; Poling, R.; Potukuchi, B.; Psihas, F.; Pushka, D.; Qiu, X.; Raddatz, N.; Radovic, A.; Rameika, R. A.; Ray, R.; Rebel, B.; Rechenmacher, R.; Reed, B.; Reilly, R.; Rocco, D.; Rodkin, D.; Ruddick, K.; Rusack, R.; Ryabov, V.; Sachdev, K.; Sahijpal, S.; Sahoo, H.; Samoylov, O.; Sanchez, M. C.; Saoulidou, N.; Schlabach, P.; Schneps, J.; Schroeter, R.; Sepulveda-Quiroz, J.; Shanahan, P.; Sherwood, B.; Sheshukov, A.; Singh, J.; Singh, V.; Smith, A.; Smith, D.; Smolik, J.; Solomey, N.; Sotnikov, A.; Sousa, A.; Soustruznik, K.; Stenkin, Y.; Strait, M.; Suter, L.; Talaga, R. L.; Tamsett, M. C.; Tariq, S.; Tas, P.; Tesarek, R. J.; Thayyullathil, R. B.; Thomsen, K.; Tian, X.; Tognini, S. C.; Toner, R.; Trevor, J.; Tzanakos, G.; Urheim, J.; Vahle, P.; Valerio, L.; Vinton, L.; Vrba, T.; Waldron, A. V.; Wang, B.; Wang, Z.; Weber, A.; Wehmann, A.; Whittington, D.; Wilcer, N.; Wildberger, R.; Wildman, D.; Williams, K.; Wojcicki, S. G.; Wood, K.; Xiao, M.; Xin, T.; Yadav, N.; Yang, S.; Zadorozhnyy, S.; Zalesak, J.; Zamorano, B.; Zhao, A.; Zirnstein, J.; Zwaska, R.; NOvA Collaboration

    2016-03-01

    This paper reports the first measurement using the NOvA detectors of νμ disappearance in a νμ beam. The analysis uses a 14 kton-equivalent exposure of 2.74 × 10²⁰ protons-on-target from the Fermilab NuMI beam. Assuming the normal neutrino mass hierarchy, we measure Δm²₃₂ = (2.52 +0.20/−0.18) × 10⁻³ eV² and sin²θ₂₃ in the range 0.38-0.65, both at the 68% confidence level, with two statistically degenerate best-fit points at sin²θ₂₃ = 0.43 and 0.60. Results for the inverted mass hierarchy are also presented.

  18. One- to two-month oscillations in SSMI surface wind speed in western tropical Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Collins, Michael L.; Stanford, John L.; Halpern, David

    1994-01-01

    The 10-m wind speed over the ocean can be estimated from microwave brightness temperature measurements recorded by the Special Sensor Microwave Imager (SSMI) instrument mounted on a polar-orbiting spacecraft. Four-year (1988-1991) time series of average daily 1 deg x 1 deg SSMI wind speeds were analyzed at selected sites in the western tropical Pacific Ocean. One- to two-month period wind speed oscillations with amplitudes statistically significant at the 95% confidence level were observed near Kanton, Eniwetok, Guam, and Truk. This is the first report of such an oscillation in SSMI wind speeds.

  19. Results of a real-time irradiation of lithium P/N and conventional N/P silicon solar cells.

    NASA Technical Reports Server (NTRS)

    Reynard, D. L.; Peterson, D. G.

    1972-01-01

    Eight types of lithium-diffused P/N and three types of conventional 10 ohm-cm N/P silicon solar cells were irradiated at four different temperatures with a strontium-90 radioisotope at a rate typical of that expected in earth orbit. The six-month irradiation confirmed earlier accelerator results, showed that certain cell types outperform others at the various temperatures, and, in general, verified the recent improvements and potential usefulness of lithium solar cells. The experimental approach and statistical methods and analyses employed yielded increased confidence in the validity of the results. Injection level effects were observed to be significant.

  20. Joint Direct Attack Munition (JDAM)

    DTIC Science & Technology

    2015-12-01

    February 19, 2015, and the O&S costs are based on an ICE dated August 28, 2014. Confidence Level of cost estimate for current APB: 50%. A mathematically derived confidence level was not computed for this Life-Cycle Cost Estimate (LCCE). This LCCE represents the expected value, taking into consideration relevant risks, including ordinary levels of external and unforeseen events. It aims to provide sufficient resources to execute the

  1. [Chronic low back pain and associated risk factors, in patients with social security medical attention: A case-control study].

    PubMed

    Durán-Nah, Jaime Jesús; Benítez-Rodríguez, Carlos René; Miam-Viana, Emilio Jesús

    2016-01-01

    Chronic low back pain (CLBP) is frequently seen in the orthopedic outpatient consultation. The aim of this paper is to identify risk factors associated with CLBP in patients cared for during the year 2012 at a General Hospital belonging to the Instituto Mexicano del Seguro Social, in Yucatán, Mexico. Data from 95 patients with CLBP (cases) were compared with data from 190 patients without CLBP (controls) using a binary logistic model (BLM), from which odds ratios (OR) and 95% confidence intervals (95% CI) were obtained. School level, body mass index (BMI) as a continuous variable, history of heavy weight lifting, some types of comorbidities, and dyslipidemia were identified as statistically significant in the bivariate analysis (p ≤ 0.05 each). In a second step, secondary school level (OR 0.25, 95% CI: 0.08-0.81), dyslipidemia (OR 0.26, 95% CI: 0.12-0.56), heavy weight lifting (OR 0.22, 95% CI: 0.12-0.42), and BMI (OR 1.22, 95% CI: 1.12-1.32) were all identified by the BLM as statistically significant. In this sample, secondary school level, dyslipidemia, and heavy weight lifting reduced the risk of CLBP, while higher BMI increased the risk.
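    For a single binary risk factor, an unadjusted odds ratio and its Wald-type 95% confidence interval can be computed directly from the 2×2 case-control table. A minimal sketch, using hypothetical counts (the abstract does not report raw tables):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for one binary risk factor.
or_, lo, hi = odds_ratio_ci(30, 40, 65, 150)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

    Note this is the crude (unadjusted) estimate; the adjusted ORs in the abstract come from a multivariable logistic model.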

  2. Additive interaction between heterogeneous environmental ...

    EPA Pesticide Factsheets

    BACKGROUND: Environmental exposures often occur in tandem; however, epidemiological research often focuses on singular exposures. Statistical interactions among broad, well-characterized environmental domains have not yet been evaluated in association with health. We address this gap by conducting a county-level cross-sectional analysis of interactions between Environmental Quality Index (EQI) domain indices on preterm birth in the United States from 2000-2005. METHODS: The EQI, a county-level index for the 2000-2005 time period, was constructed from five domain-specific indices (air, water, land, built, and sociodemographic) using principal component analyses. County-level preterm birth rates (n=3141) were estimated using live births from the National Center for Health Statistics. Linear regression was used to estimate prevalence differences (PD) and 95% confidence intervals (CI) comparing worse environmental quality to better quality for each model for a) each individual domain main effect, b) the interaction contrast, and c) the two main effects plus interaction effect (i.e., the "net effect"), to show departure from additive interaction for all U.S. counties. Analyses were also performed for subgroupings by four urban/rural strata. RESULTS: We found the suggestion of antagonistic interactions but no synergism, along with several purely additive (i.e., no interaction) associations. In the non-stratified model, we observed antagonistic interac

  3. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    PubMed

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, thereby providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency but also that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals maintain the preassigned coverage probability quite accurately, even for rather small sample sizes. For convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
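    The underlying idea can be sketched with a plain (non-studentized) permutation test of the AUC-type relative effect p = P(X < Y) + 0.5·P(X = Y); the Brunner-Munzel procedure discussed above additionally studentizes the statistic, which this simplified illustration omits:

```python
import random

def auc_effect(x, y):
    """Mann-Whitney relative effect p = P(X < Y) + 0.5 * P(X = Y),
    i.e. the area under the ROC curve, with ties handled."""
    wins = sum((xi < yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    return wins / (len(x) * len(y))

def permutation_pvalue(x, y, n_perm=2000, seed=1):
    """Two-sided permutation p-value for H0: p = 0.5 (no effect).
    Simplified, non-studentized version of the approach above."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    observed = abs(auc_effect(x, y) - 0.5)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel group membership at random
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(auc_effect(px, py) - 0.5) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Example: clearly separated hypothetical groups.
p_val = permutation_pvalue(list(range(10)), list(range(20, 30)))
```

    Inverting such a test over a grid of hypothesized effects yields the range-preserving confidence intervals mentioned in the abstract.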

  4. Nitrate in drinking water and risk of death from pancreatic cancer in Taiwan.

    PubMed

    Yang, Chun-Yuh; Tsai, Shang-Shyue; Chiu, Hui-Fen

    2009-01-01

    The relationship between nitrate levels in drinking water and risk of pancreatic cancer development remains inconclusive. A matched case-control and nitrate ecology study was used to investigate the association between mortality attributed to pancreatic cancer and nitrate exposure from Taiwan's drinking water. All pancreatic cancer deaths of Taiwan residents from 2000 through 2006 were obtained from the Bureau of Vital Statistics of the Taiwan Provincial Department of Health. Controls were deaths from other causes and were pair-matched to the cases by gender, year of birth, and year of death. Each matched control was selected randomly from the set of possible controls for each case. Data on nitrate-nitrogen (NO(3)-N) levels of drinking water throughout Taiwan were collected from Taiwan Water Supply Corporation (TWSC). The municipality of residence for cancer cases and controls was assumed to be the source of the subject's nitrate exposure via drinking water. The adjusted odds ratios and confidence limits for pancreatic cancer death for those with high nitrate levels in their drinking water, as compared to the lowest tertile, were 1.03 (0.9-1.18) and 1.1 (0.96-1.27), respectively. The results of the present study show that there was no statistically significant association between the levels of nitrate in drinking water and increased risk of death from pancreatic cancer.

  5. Interlaboratory study of free cyanide methods compared to total cyanide measurements and the effect of preservation with sodium hydroxide for secondary- and tertiary-treated waste water samples.

    PubMed

    Stanley, Brett J; Antonio, Karen

    2012-11-01

    Several methods exist for the measurement of cyanide levels in treated wastewater, typically requiring preservation of the sample with sodium hydroxide to minimize loss of hydrogen cyanide gas (HCN). Recent reports have shown that cyanide levels may increase with chlorination or preservation. In this study, three flow injection analysis methods involving colorimetric and amperometric detection were compared within one laboratory, as well as across separate laboratories and equipment. Split wastewater samples from eight facilities and three different sampling periods were tested. An interlaboratory confidence interval of 3.5 ppb was calculated, compared with the intralaboratory reporting limit of 2 ppb. The results show that free cyanide measurements are not statistically different from total cyanide levels. An artificial increase in cyanide level is observed with all methods for preserved samples relative to nonpreserved samples, with an average increase of 2.3 ppb. The possible loss of cyanide without preservation is shown to be statistically insignificant if properly stored up to 48 hours. The cyanide increase with preservation is further substantiated with the method of standard additions and is not a matrix interference. The increase appears to be correlated with the amount of cyanide observed without preservation, which appears to be greater in those facilities that disinfect their wastewater with chlorine, followed by dechlorination with sodium bisulfite.

  6. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. Pairwise moving-block bootstrap resampling is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the non-calibrated intervals, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, with a coverage error that is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. 
One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10-year lag, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
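    The inner bootstrap loop of such a procedure can be sketched as follows. This is a simplified illustration using plain iid pair resampling and a percentile interval; PearsonT3 itself uses pairwise moving-block resampling, Student's t intervals, and a second calibration loop, all of which are omitted here:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def bootstrap_ci(xs, ys, level=0.95, n_boot=1000, seed=0):
    """Percentile bootstrap CI for the correlation coefficient,
    resampling (x, y) pairs with replacement."""
    rng = random.Random(seed)
    n = len(xs)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(pearson([xs[i] for i in idx], [ys[i] for i in idx]))
    stats.sort()
    alpha = (1 - level) / 2
    lo = stats[int(alpha * n_boot)]
    hi = stats[min(n_boot - 1, int((1 - alpha) * n_boot))]
    return lo, hi

# Hypothetical strongly correlated pair of series (iid illustration only;
# serially correlated data would need block resampling as in PearsonT3).
xs = list(range(30))
ys = [x + ((x * 7) % 5) for x in xs]
lo, hi = bootstrap_ci(xs, ys)
```

    Calibration would wrap a second, inner bootstrap around each resample to adjust the nominal level so that the achieved coverage matches it.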

  7. Calcium and phosphorus regulatory hormones and risk of incident symptomatic kidney stones.

    PubMed

    Taylor, Eric N; Hoofnagle, Andrew N; Curhan, Gary C

    2015-04-07

    Calcium and phosphorus regulatory hormones may contribute to the pathogenesis of calcium nephrolithiasis. However, there has been no prospective study to date of plasma hormone levels and risk of kidney stones. This study aimed to examine independent associations between plasma levels of 1,25-dihydroxyvitamin D (1,25[OH]2D), 25-hydroxyvitamin D, 24,25-dihydroxyvitamin D, fibroblast growth factor 23 (FGF23), parathyroid hormone, calcium, phosphate, and creatinine and the subsequent risk of incident kidney stones. This study was a prospective, nested case-control study of men in the Health Professionals Follow-Up Study who were free of diagnosed nephrolithiasis at blood draw. During 12 years of follow-up, 356 men developed an incident symptomatic kidney stone. Using risk set sampling, controls were selected in a 2:1 ratio (n=712 controls) and matched for age, race, and year, month, and time of day of blood collection. Baseline plasma levels of 25-hydroxyvitamin D, 24,25-dihydroxyvitamin D, parathyroid hormone, calcium, phosphate, and creatinine were similar in cases and controls. Mean 1,25(OH)2D and median FGF23 levels were higher in cases than controls but differences were small and statistically nonsignificant (45.7 versus 44.2 pg/ml, P=0.07 for 1,25[OH]2D; 47.6 versus 45.1 pg/ml, P=0.08 for FGF23). However, after adjusting for body mass index, diet, plasma factors, and other covariates, the odds ratios of incident symptomatic kidney stones in the highest compared with lowest quartiles were 1.73 (95% confidence interval, 1.11 to 2.71; P for trend 0.01) for 1,25(OH)2D and 1.45 (95% confidence interval, 0.96 to 2.19; P for trend 0.03) for FGF23. There were no significant associations between other plasma factors and kidney stone risk. Higher plasma 1,25(OH)2D, even in ranges considered normal, is independently associated with higher risk of symptomatic kidney stones. 
Although of borderline statistical significance, these findings also suggest that higher FGF23 may be associated with risk. Copyright © 2015 by the American Society of Nephrology.

  8. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    NASA Astrophysics Data System (ADS)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size as well as the 90/95 percent crack length vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimators (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
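    A log-logistic POD model of the kind discussed above (logit-linear in ln a) can be fitted to hit/miss data by maximum likelihood. A minimal sketch using plain gradient ascent on the logistic log-likelihood, with hypothetical inspection data; production POD software would also compute the confidence bound needed for a 90/95 value:

```python
import math

def fit_pod(sizes, hits, lr=0.1, iters=20000):
    """MLE fit of a log-logistic POD curve to hit/miss data:
    POD(a) = 1 / (1 + exp(-(b0 + b1*ln a))).
    Gradient ascent on the concave logistic log-likelihood."""
    xs = [math.log(a) for a in sizes]
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, hits):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

def pod(a, b0, b1):
    """Fitted probability of detection at crack size a."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

# Hypothetical hit/miss inspection data: crack sizes (mm) and outcomes.
sizes = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
hits  = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
b0, b1 = fit_pod(sizes, hits)
# Crack size with 90% detection probability (point estimate only; the
# 90/95 crack length adds a 95% confidence bound on top of this).
a90 = math.exp((math.log(9.0) - b0) / b1)
```

    MLE-based fits like this one use only the inspection outcomes themselves, which is the advantage over the range interval method noted in the abstract.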

  9. Piloting a Sex-Specific, Technology-Enhanced, Active Learning Intervention for Stroke Prevention in Women.

    PubMed

    Dirickson, Amanda; Stutzman, Sonja E; Alberts, Mark J; Novakovic, Roberta L; Stowe, Ann M; Beal, Claudia C; Goldberg, Mark P; Olson, DaiWai M

    2017-12-01

    Recent studies reveal deficiencies in stroke awareness and knowledge of risk factors among women. Existing stroke education interventions may not address common and sex-specific risk factors in the population with the highest stroke-related rate of mortality. This pilot study assessed the efficacy of a technology-enhanced, sex-specific educational program ("SISTERS") for women's knowledge of stroke. This was an experimental pretest-posttest design. The sample consisted of 150 women (mean age, 55 years) with at least 1 stroke risk factor. Participants were randomized to either the intervention (n = 75) or control (n = 75) group. Data were collected at baseline and at a 2-week posttest. There was no statistically significant difference in mean knowledge score (P = .67), mean confidence score (P = .77), or mean accuracy score (P = .75) between the intervention and control groups at posttest. Regression analysis revealed that older age was associated with lower knowledge scores (P < .001) and lower confidence scores (P < .001). After controlling for age, the SISTERS program was associated with a statistically significant difference in knowledge (P < .001) and confidence (P < .001). Although no change occurred overall, after controlling for age, there was a statistically significant benefit. Older women may have less comfort with technology and require consideration for cognitive differences.

  10. Variance estimates and confidence intervals for the Kappa measure of classification accuracy

    Treesearch

    M. A. Kalkhan; R. M. Reich; R. L. Czaplewski

    1997-01-01

    The Kappa statistic is frequently used to characterize the results of an accuracy assessment used to evaluate land use and land cover classifications obtained by remotely sensed data. This statistic allows comparisons of alternative sampling designs, classification algorithms, photo-interpreters, and so forth. In order to make these comparisons, it is...
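    The kappa point estimate and a normal-approximation confidence interval can be sketched as follows; this uses the simple large-sample variance approximation var(κ) ≈ p_o(1 − p_o) / (N(1 − p_e)²) rather than the full delta-method variance a rigorous accuracy assessment would use:

```python
import math

def kappa_ci(table, z=1.96):
    """Cohen's kappa for a square confusion matrix, with the simple
    large-sample variance approximation
    var(kappa) ~= p_o*(1 - p_o) / (N*(1 - p_e)**2)."""
    m = len(table)
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(m)) / n           # observed agreement
    rows = [sum(table[i]) for i in range(m)]
    cols = [sum(table[i][j] for i in range(m)) for j in range(m)]
    p_e = sum(rows[i] * cols[i] for i in range(m)) / n**2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    se = math.sqrt(p_o * (1 - p_o)) / ((1 - p_e) * math.sqrt(n))
    return kappa, kappa - z * se, kappa + z * se

# Hypothetical 2-class confusion matrix from an accuracy assessment.
kappa, lo, hi = kappa_ci([[45, 5], [10, 40]])
```

    Non-overlapping intervals computed this way give a rough screen for comparing classifications, though formal comparisons should test the difference in kappas directly.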

  11. I Am 95% Confident That the Earth is Round: An Interview about Statistics with Chris Spatz.

    ERIC Educational Resources Information Center

    Dillon, Kathleen M.

    1999-01-01

    Presents an interview with Chris Spatz, a professor of psychology at Hendrix College in Conway, Arkansas. Discusses null hypothesis statistical tests (NHST) and the arguments for and against the use of NHST, changes in research articles, textbook changes, and the Internet. (CMK)

  12. Senior citizens as rescuers: Is reduced knowledge the reason for omitted lay-resuscitation-attempts? Results from a representative survey with 2004 interviews.

    PubMed

    Brinkrolf, Peter; Bohn, Andreas; Lukas, Roman-Patrik; Heyse, Marko; Dierschke, Thomas; Van Aken, Hugo Karel; Hahnenkamp, Klaus

    2017-01-01

    Resuscitation (CPR) provided by a bystander prior to the arrival of the emergency services is a beneficial factor for surviving a cardiac arrest (CA). Our registry-based data show that older patients receive bystander-CPR less frequently. Little is known about possible reasons for this finding. We sought to investigate the hypothesis that awareness of CPR measures is lower in older laypersons, as a possible reason for fewer CPR attempts in senior citizens. 1206 datasets on bystander resuscitations actually carried out were analyzed for age-dependent differences. Subsequently, we investigated whether the knowledge required to carry out bystander-CPR, and the self-confidence to do so, differ between younger and older citizens, using computer-assisted telephone interviews. 2004 interviews were performed and statistically analyzed. A lower level of knowledge of how to carry out bystander-CPR was seen in older individuals. For example, 82.4% of interviewees under 65 years of age knew the correct emergency number. In this group, 66.6% named CPR as the relevant procedure in CA. Among older individuals these responses were given by only 75.1% and 49.5%, respectively (V = 0.082; P < 0.001 and V = 0.0157; P < 0.001). Additionally, a difference concerning participants' confidence in their own abilities was detectable. 58.0% of the persons younger than 65 years were confident that they would detect a CA, in comparison to 44.6% of the participants older than 65 years (V = 0.120; P < 0.001). Similarly, 62.7% of the interviewees younger than 65 were certain they would know what to do during CPR, compared to 51.3% of the other group (V = 0.103; P < 0.001). Lower levels of older bystanders' knowledge and self-confidence might provide an explanation for why older patients receive bystander-CPR less frequently. Further investigation is necessary to identify causal connections and optimum ways to empower bystander resuscitation.
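    Effect sizes such as the Cramér's V values reported above come from a chi-square test on the underlying contingency table. A minimal sketch for the 2×2 case (age group × response), with hypothetical counts; for a 2×2 table, Cramér's V equals the phi coefficient and the df=1 p-value reduces to a complementary error function:

```python
import math

def chi2_cramers_v_2x2(table):
    """Pearson chi-square statistic, df=1 p-value, and Cramér's V
    for a 2x2 contingency table [[a, b], [c, d]].
    For df=1, the survival function is p = erfc(sqrt(chi2 / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Closed-form chi-square for a 2x2 table.
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    v = math.sqrt(chi2 / n)  # Cramér's V (= phi for 2x2)
    return chi2, p, v

# Hypothetical 2x2 table: age group (rows) x correct response (columns).
chi2, p, v = chi2_cramers_v_2x2([[10, 20], [20, 10]])
```

    Small V values with tiny p-values, as in the survey above, reflect weak but reliably detectable associations in a large sample.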

  13. Senior citizens as rescuers: Is reduced knowledge the reason for omitted lay-resuscitation-attempts? Results from a representative survey with 2004 interviews

    PubMed Central

    Lukas, Roman-Patrik; Heyse, Marko; Dierschke, Thomas; Van Aken, Hugo Karel; Hahnenkamp, Klaus

    2017-01-01

    Objective Resuscitation (CPR) provided by a bystander prior to the arrival of the emergency services is a beneficial factor for surviving a cardiac arrest (CA). Our registry-based data show that older patients receive bystander-CPR less frequently. Little is known about possible reasons for this finding. We sought to investigate the hypothesis that awareness of CPR measures is lower in older laypersons, as a possible reason for fewer CPR attempts in senior citizens. Methods 1206 datasets on bystander resuscitations actually carried out were analyzed for age-dependent differences. Subsequently, we investigated whether the knowledge required to carry out bystander-CPR, and the self-confidence to do so, differ between younger and older citizens, using computer-assisted telephone interviews. 2004 interviews were performed and statistically analyzed. Results A lower level of knowledge of how to carry out bystander-CPR was seen in older individuals. For example, 82.4% of interviewees under 65 years of age knew the correct emergency number. In this group, 66.6% named CPR as the relevant procedure in CA. Among older individuals these responses were given by only 75.1% and 49.5%, respectively (V = 0.082; P < 0.001 and V = 0.0157; P < 0.001). Additionally, a difference concerning participants' confidence in their own abilities was detectable. 58.0% of the persons younger than 65 years were confident that they would detect a CA, in comparison to 44.6% of the participants older than 65 years (V = 0.120; P < 0.001). Similarly, 62.7% of the interviewees younger than 65 were certain they would know what to do during CPR, compared to 51.3% of the other group (V = 0.103; P < 0.001). Conclusions Lower levels of older bystanders' knowledge and self-confidence might provide an explanation for why older patients receive bystander-CPR less frequently. Further investigation is necessary to identify causal connections and optimum ways to empower bystander resuscitation. PMID:28604793

  14. Preservice Educators' Confidence in Addressing Sexuality Education

    ERIC Educational Resources Information Center

    Wyatt, Tammy Jordan

    2009-01-01

    This study examined 328 preservice educators' level of confidence in addressing four sexuality education domains and 21 sexuality education topics. Significant differences in confidence levels across the four domains were found for gender, academic major, sexuality education philosophy, and sexuality education knowledge. Preservice educators…

  15. Obstetric training in Emergency Medicine: a needs assessment.

    PubMed

    Janicki, Adam James; MacKuen, Courteney; Hauspurg, Alisse; Cohn, Jamieson

    2016-01-01

    Identification and management of obstetric emergencies is essential in emergency medicine (EM), but exposure to pregnant patients during EM residency training is frequently limited. To date, there is little data describing effective ways to teach residents this material. Current guidelines require completion of 2 weeks of obstetrics or 10 vaginal deliveries, but it is unclear whether this instills competency. We created a 15-item survey evaluating resident confidence and knowledge related to obstetric emergencies. To assess confidence, we asked residents about their exposure and comfort level regarding obstetric emergencies and eight common presentations and procedures. We assessed knowledge via multiple-choice questions addressing common obstetric presentations, pelvic ultrasound image, and cardiotocography interpretation. The survey was distributed to residency programs utilizing the Council of Emergency Medicine Residency Directors (CORD) listserv. The survey was completed by 212 residents, representing 55 of 204 (27%) programs belonging to CORD and 11.2% of 1,896 eligible residents. Fifty-six percent felt they had adequate exposure to obstetric emergencies. The overall comfort level was 2.99 (1-5 scale) and comfort levels of specific presentations and procedures ranged from 2.58 to 3.97; all increased moderately with postgraduate year (PGY) level. Mean overall percentage of items answered correctly on the multiple-choice questions was 58% with no statistical difference by PGY level. Performance on individual questions did not differ by PGY level. The identification and management of obstetric emergencies is the cornerstone of EM. We found preliminary evidence of a concerning lack of resident comfort regarding obstetric conditions and knowledge deficits on core obstetrics topics. EM residents may benefit from educational interventions to increase exposure to these topics.

  16. Advanced Extremely High Frequency Satellite (AEHF)

    DTIC Science & Technology

    2015-12-01

    control their tactical and strategic forces at all levels of conflict up to and including general nuclear war, and it supports the attainment of... Confidence Level of cost estimate for current APB: 50%. The ICE that supports the AEHF SV 1-4, like all life-cycle cost... mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in methods used in building

  17. Wideband Global SATCOM (WGS)

    DTIC Science & Technology

    2015-12-01

    system level testing. The WGS-6 financial data is not reported in this SAR because funding is provided by Australia in exchange for access to a... Confidence Level of cost estimate for current APB: 50%. The ICE to support the WGS Milestone C decision... to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in

  18. Gender-, age-, and race/ethnicity-based differential item functioning analysis of the movement disorder society-sponsored revision of the Unified Parkinson's disease rating scale.

    PubMed

    Goetz, Christopher G; Liu, Yuanyuan; Stebbins, Glenn T; Wang, Lu; Tilley, Barbara C; Teresi, Jeanne A; Merkitch, Douglas; Luo, Sheng

    2016-12-01

    Assess MDS-UPDRS items for gender-, age-, and race/ethnicity-based differential item functioning. Assessing differential item functioning is a core rating scale validation step. For the MDS-UPDRS, differential item functioning occurs if item-score probabilities among people with similar levels of parkinsonism differ according to selected covariates (gender, age, race/ethnicity). If the magnitude of differential item functioning is clinically relevant, item-score interpretation must consider influences by these covariates. Differential item functioning can be nonuniform (the covariate variably influences an item-score across different levels of parkinsonism) or uniform (the covariate influences an item-score consistently over all levels of parkinsonism). Using the MDS-UPDRS translation database of more than 5,000 PD patients from 14 languages, we tested gender-, age-, and race/ethnicity-based differential item functioning. To designate an item as having clinically relevant differential item functioning, we required statistical confirmation by 2 independent methods, along with a McFadden pseudo-R2 magnitude statistic greater than "negligible." Most items showed no gender-, age-, or race/ethnicity-based differential item functioning. When differential item functioning was identified, the magnitude statistic was always in the "negligible" range, and the scale-level impact was minimal. The absence of clinically relevant differential item functioning across all items and all parts of the MDS-UPDRS is strong evidence that the scale can be used confidently. As studies of Parkinson's disease increasingly involve multinational efforts and the MDS-UPDRS has several validated non-English translations, the findings support the scale's broad applicability in populations with varying gender, age, and race/ethnicity distributions. © 2016 International Parkinson and Movement Disorder Society.

  19. Serum Endotoxins and Flagellin and Risk of Colorectal Cancer in the European Prospective Investigation into Cancer and Nutrition (EPIC) Cohort

    PubMed Central

    Kong, So Yeon; Tran, Hao Quang; Gewirtz, Andrew T.; McKeown-Eyssen, Gail; Fedirko, Veronika; Romieu, Isabelle; Tjønneland, Anne; Olsen, Anja; Overvad, Kim; Boutron-Ruault, Marie-Christine; Bastide, Nadia; Affret, Aurélie; Kühn, Tilman; Kaaks, Rudolf; Boeing, Heiner; Aleksandrova, Krasimira; Trichopoulou, Antonia; Kritikou, Maria; Vasilopoulou, Effie; Palli, Domenico; Krogh, Vittorio; Mattiello, Amalia; Tumino, Rosario; Naccarati, Alessio; Bueno-de-Mesquita, H.Bas; Peeters, Petra H.; Weiderpass, Elisabete; Quirós, J. Ramón; Sala, Núria; Sánchez, María-José; Huerta Castaño, José María; Barricarte, Aurelio; Dorronsoro, Miren; Werner, Mårten; Wareham, Nicholas J.; Khaw, Kay-Tee; Bradbury, Kathryn E.; Freisling, Heinz; Stavropoulou, Faidra; Ferrari, Pietro; Gunter, Marc J.; Cross, Amanda J.; Riboli, Elio; Bruce, W. Robert

    2017-01-01

    Background Chronic inflammation and oxidative stress are thought to be involved in colorectal cancer (CRC) development. These processes may be contributed to by leakage of bacterial products, such as lipopolysaccharide (LPS) and flagellin, across the gut barrier. The objective of this study, nested within a prospective cohort, was to examine associations between circulating LPS and flagellin serum antibody levels and CRC risk. Methods 1,065 incident CRC cases (colon n=667; rectal n=398) were matched (1:1) to control subjects. Serum flagellin- and LPS-specific IgA and IgG levels were quantitated by ELISA. Multivariable conditional logistic regression models were used to calculate odds ratios (OR) and 95% confidence intervals (CI), adjusting for multiple relevant confounding factors. Results Overall, elevated anti-LPS and anti-flagellin biomarker levels were not associated with CRC risk. After testing potential interactions by various factors relevant for CRC risk and anti-LPS and anti-flagellin, sex was identified as a statistically significant interaction factor (P-interaction < 0.05 for all the biomarkers). Analyses stratified by sex showed a statistically significant positive CRC risk association for men (fully adjusted OR for highest vs. lowest quartile for total anti-LPS+flagellin = 1.66; 95% CI, 1.10-2.51; P-trend = 0.049), while a borderline statistically significant inverse association was observed for women (fully adjusted OR = 0.70; 95% CI, 0.47-1.02; P-trend = 0.18). Conclusion In this prospective study of European populations, we found bacterial exposure levels to be positively associated with CRC risk among men, while in women a possible inverse association may exist. Impact Further studies are warranted to better clarify these preliminary observations. PMID:26823475

  20. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) it explicitly adjusts the statistical significance to any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Several examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
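
    The permutation approach described above can be sketched with SciPy's Lomb-Scargle implementation. The published programs are Fortran 77; this Python sketch illustrates only the core idea, without the variance-reduction smoothing: shuffle the observed values over the fixed uneven sampling times to destroy any periodicity, and read the confidence level off the quantiles of the resulting null distribution of the periodogram maximum.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Unevenly sampled noisy sinusoid: keep 300 of 500 regular sample times.
t = np.sort(rng.choice(np.linspace(0.0, 100.0, 500), size=300, replace=False))
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)
x -= x.mean()                                   # lombscargle expects zero-mean input

freqs = np.linspace(0.01, 0.5, 200)             # cycles per unit time
omegas = 2 * np.pi * freqs                      # lombscargle takes angular frequencies
pgram = lombscargle(t, x, omegas)

# Permutation test: shuffling the values over the fixed uneven times gives a
# null distribution for the maximum periodogram ordinate.
null_max = np.array([lombscargle(t, rng.permutation(x), omegas).max()
                     for _ in range(200)])
threshold_95 = np.quantile(null_max, 0.95)      # 95% confidence level

peak_freq = freqs[np.argmax(pgram)]
print(f"peak at {peak_freq:.3f} cycles/unit; "
      f"exceeds 95% level: {pgram.max() > threshold_95}")
```

    Because the permutation preserves the sampling pattern, the test automatically accounts for the spectral leakage that uneven sampling induces, which parametric thresholds do not.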

  1. Examination of the Relationship between TEOG Score Transition (from Basic to Secondary Education), Self-Confidence, Self-Efficacy and Motivation Level

    ERIC Educational Resources Information Center

    Usta, H. Gonca

    2017-01-01

    The relationship between individuals' academic success, motivation and self-confidence and self-efficacy levels cannot be ignored. The aim of this study is to develop and test a theoretical model considering the relationship between academic motivation, self-confidence and self-efficacy levels in transition from middle school to high school. For…

  2. An Analysis of Training Effects on School Personnel's Knowledge, Attitudes, Comfort, and Confidence Levels toward Educating Students about HIV/AIDS in Pennsylvania

    ERIC Educational Resources Information Center

    Deutschlander, Sharon

    2010-01-01

    The purpose of this study was to determine the training effects on school personnel's knowledge, attitudes, comfort, and confidence levels toward educating students about HIV/AIDS in Pennsylvania. The following four research questions were explored: (a) What is the knowledge, attitudes, confidence, and comfort levels of school personnel regarding…

  3. Current Velocity Data on Dwarf Galaxy NGC 1052-DF2 do not Constrain it to Lack Dark Matter

    NASA Astrophysics Data System (ADS)

    Martin, Nicolas F.; Collins, Michelle L. M.; Longeard, Nicolas; Tollerud, Erik

    2018-05-01

    It was recently proposed that the globular cluster system of the very low surface brightness galaxy NGC 1052-DF2 is dynamically very cold, leading to the conclusion that this dwarf galaxy has little or no dark matter. Here, we show that a robust statistical measure of the velocity dispersion of the tracer globular clusters implies a mundane velocity dispersion and a poorly constrained mass-to-light ratio. Models that include the possibility that some of the tracers are field contaminants do not yield a more constraining inference. We derive only a weak constraint on the mass-to-light ratio of the system within the half-light radius (M/L_V < 6.7 at the 90% confidence level) or within the radius of the furthest tracer (M/L_V < 8.1 at the 90% confidence level). This limit may imply a mass-to-light ratio on the low end for a dwarf galaxy, but many Local Group dwarf galaxies fall well within this constraint. With this study, we emphasize the need to reliably account for measurement uncertainties and to stay as close as possible to the data when determining dynamical masses from very small data sets of tracers.
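
    The key point, folding each tracer's measurement error into the inference, can be sketched with a maximum-likelihood estimate of the intrinsic dispersion, where each object's error broadens its Gaussian term. The velocities and errors below are invented for illustration; they are not the NGC 1052-DF2 measurements, and this is not the authors' full analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented line-of-sight velocities (km/s, relative to systemic) and
# per-tracer measurement errors for ten globular-cluster tracers.
v = np.array([-5.0, 12.0, 3.0, -8.0, 15.0, 1.0, -12.0, 7.0, -3.0, 9.0])
verr = np.array([6.0, 5.0, 7.0, 6.0, 8.0, 5.0, 7.0, 6.0, 5.0, 7.0])

def neg_loglike(sigma):
    """Negative Gaussian log-likelihood: intrinsic dispersion sigma broadened,
    object by object, by the measurement error (mean fixed at sample mean)."""
    var = sigma ** 2 + verr ** 2
    r = v - v.mean()
    return 0.5 * np.sum(np.log(2 * np.pi * var) + r ** 2 / var)

res = minimize_scalar(neg_loglike, bounds=(0.01, 50.0), method="bounded")
print(f"maximum-likelihood intrinsic dispersion: {res.x:.1f} km/s")
```

    With errors comparable to the spread of the data, the likelihood in sigma is broad and asymmetric, which is why a small tracer set yields only weak confidence limits.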

  4. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-10-18

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  5. Customer quality and type 2 diabetes from the patients' perspective: a cross-sectional study.

    PubMed

    Tabrizi, Jafar S; Wilson, Andrew J; O'Rourke, Peter K

    2010-12-18

    Quality in health care can be seen as having three principal dimensions: service, technical and customer quality. This study aimed to measure Customer Quality in relation to self-management of Type 2 diabetes. A cross-sectional survey of 577 people with Type 2 diabetes was carried out in Australia. The 13-item Patient Activation Measure was used to evaluate Customer Quality based on self-reported knowledge, skills and confidence in four stages of self-management. All statistical analyses were conducted using SPSS 13.0. All participants achieved scores at the level of stage 1, but ten percent did not achieve score levels consistent with stage 2, and a further 16% did not reach the action stage. Seventy-four percent reported capacity for taking action for self-management, and 38% reported the highest Customer Quality score and the ability to change their actions by changing health and environment. Participants with higher educational attainment, better diabetes control status and those who maintained continuity of care reported a higher Customer Quality score, reflecting higher capacity for self-management. Specific capacity-building programs for health care providers and people with Type 2 diabetes are needed to increase their knowledge and skills and improve their confidence in self-management, to achieve improved quality of delivered care and better health outcomes.

  6. Sample size determination for disease prevalence studies with partially validated data.

    PubMed

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.
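
    In the simplest fully validated case (gold-standard classification only, no misclassification), the second approach, choosing n to control the width of a confidence interval for a prevalence at a pre-specified confidence level, reduces to a one-line Wald calculation; the partially validated setting of the paper requires the more elaborate procedures it derives. A minimal sketch:

```python
from math import ceil
from statistics import NormalDist

def prevalence_sample_size(p_guess, width, conf=0.95):
    """Smallest n giving a Wald CI for a proportion with total width <= width
    at the given confidence level (gold-standard classification only)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return ceil(4 * z ** 2 * p_guess * (1 - p_guess) / width ** 2)

# Anticipated prevalence 10%, interval no wider than 6 points (+/- 3 points):
print(prevalence_sample_size(0.10, 0.06))   # -> 385
```

    Halving the target width quadruples the required sample size, which is why width-based planning is usually more demanding than power-based planning.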

  7. Three-dimensional CT enterography using oral gastrografin in patients with small bowel obstruction: comparison with axial CT images or fluoroscopic findings.

    PubMed

    Hong, Seong Sook; Kim, Ah Young; Kwon, Seok Beom; Kim, Pyo Nyun; Lee, Moon-Gyu; Ha, Hyun Kwon

    2010-10-01

    To evaluate the feasibility of 3D CT enterography using oral gastrografin in patients with small bowel obstruction (SBO), focusing on improving diagnostic performance as compared with the use of axial CT images and fluoroscopic findings. Over a 10-month period, 18 patients with SBO detected clinically and radiologically were enrolled. In all patients, gastrografin was ingested prior to CT enterography. Twelve patients underwent a fluoroscopic examination. Two gastrointestinal radiologists randomly assessed the images, rating diagnostic confidence for the level and cause of SBO and the interpretability of each image. Statistical significance was assessed using the Wilcoxon rank sum test. All patients (100%) tolerated the administration of oral gastrografin well. The use of 3D CT enterography significantly improved diagnostic confidence for the interpretation of the level and cause of SBO and the assessed interpretability of each image as compared with the use of axial CT images (P < 0.05). 3D CT enterography was also superior to fluoroscopic examination (P < 0.05). The use of gastrografin for 3D CT enterography is a safe and feasible technique for precise evaluation of known or suspected SBO.

  8. Sigma: Strain-level inference of genomes from metagenomic analysis for biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Tae-Hyuk; Chai, Juanjuan; Pan, Chongle

    Motivation: Metagenomic sequencing of clinical samples provides a promising technique for direct pathogen detection and characterization in biosurveillance. Taxonomic analysis at the strain level can be used to resolve serotypes of a pathogen in biosurveillance. Sigma was developed for strain-level identification and quantification of pathogens using their reference genomes based on metagenomic analysis. Results: Sigma provides not only accurate strain-level inferences, but also three unique capabilities: (i) Sigma quantifies the statistical uncertainty of its inferences, which includes hypothesis testing of identified genomes and confidence interval estimation of their relative abundances; (ii) Sigma enables strain variant calling by assigning metagenomic reads to their most likely reference genomes; and (iii) Sigma supports parallel computing for fast analysis of large datasets. In conclusion, the algorithm performance was evaluated using simulated mock communities and fecal samples with spike-in pathogen strains. Availability and Implementation: Sigma was implemented in C++ with source codes and binaries freely available at http://sigma.omicsbio.org.

  9. Sigma: Strain-level inference of genomes from metagenomic analysis for biosurveillance

    DOE PAGES

    Ahn, Tae-Hyuk; Chai, Juanjuan; Pan, Chongle

    2014-09-29

    Motivation: Metagenomic sequencing of clinical samples provides a promising technique for direct pathogen detection and characterization in biosurveillance. Taxonomic analysis at the strain level can be used to resolve serotypes of a pathogen in biosurveillance. Sigma was developed for strain-level identification and quantification of pathogens using their reference genomes based on metagenomic analysis. Results: Sigma provides not only accurate strain-level inferences, but also three unique capabilities: (i) Sigma quantifies the statistical uncertainty of its inferences, which includes hypothesis testing of identified genomes and confidence interval estimation of their relative abundances; (ii) Sigma enables strain variant calling by assigning metagenomic reads to their most likely reference genomes; and (iii) Sigma supports parallel computing for fast analysis of large datasets. In conclusion, the algorithm performance was evaluated using simulated mock communities and fecal samples with spike-in pathogen strains. Availability and Implementation: Sigma was implemented in C++ with source codes and binaries freely available at http://sigma.omicsbio.org.

  10. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721
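
    The contrast between simple majority voting and voting weighted by self-assessed performance can be sketched on synthetic labels. The rater accuracies and the confidence-as-noisy-proxy assumption below are invented for illustration; this is not the authors' statistical fusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic setup: 7 raters label 20,000 binary pixels. Each rater's true
# accuracy is fixed below (illustrative values); self-assessed confidence is
# modeled as a noisy proxy for that accuracy.
truth = rng.integers(0, 2, size=20_000)
accuracy = np.array([0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90])
correct = rng.random((7, truth.size)) < accuracy[:, None]
labels = np.where(correct, truth, 1 - truth)   # correct with prob. accuracy
confidence = np.clip(accuracy + rng.normal(0.0, 0.05, size=7), 0.51, 0.99)

# Simple majority vote (7 raters, so no ties).
majority = (labels.sum(axis=0) > 3).astype(int)

# Weight each rater by the log-odds of their self-assessed confidence --
# the optimal weight if confidence exactly equalled true accuracy.
w = np.log(confidence / (1.0 - confidence))
weighted = ((w[:, None] * (2 * labels - 1)).sum(axis=0) > 0).astype(int)

print("majority-vote accuracy:", (majority == truth).mean())
print("confidence-weighted accuracy:", (weighted == truth).mean())
```

    When self-assessments track true skill, the log-odds weighting down-weights weak raters and typically improves on an unweighted vote, which is the intuition behind the paper's finding.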

  11. Confidence in delegation and leadership of registered nurses in long-term-care hospitals.

    PubMed

    Yoon, Jungmin; Kim, Miyoung; Shin, Juhhyun

    2016-07-01

    Effective delegation improves job satisfaction, responsibility, productivity and development. The ageing population demands more nurses in long-term-care hospitals. Delegation and leadership promote cooperation among nursing staff. However, little research describes nursing delegation and leadership style. We investigated the relationship between registered nurses' delegation confidence and leadership in Korean long-term-care hospitals. Our descriptive correlational design sampled 199 registered nurses from 13 long-term-care hospitals in Korea. Instruments were the Confidence and Intent to Delegate Scale and Multifactor Leadership Questionnaire. Confidence in delegation significantly aligned with current-unit clinical experience, length of total clinical-nursing experience, delegation-training experience and leadership. Transformational leadership was the most statistically significant factor influencing delegation confidence. When effective delegation integrates with efficient leadership, staff can deliver optimal care to long-term-care patients. © 2016 John Wiley & Sons Ltd.

  12. Principles of Statistics: What the Sports Medicine Professional Needs to Know.

    PubMed

    Riemann, Bryan L; Lininger, Monica R

    2018-07-01

    Understanding the results and statistics reported in original research remains a large challenge for many sports medicine practitioners and, in turn, may be one of the biggest barriers to integrating research into sports medicine practice. The purpose of this article is to provide the minimal essentials a sports medicine practitioner needs to know about interpreting statistics and research results to facilitate the incorporation of the latest evidence into practice. Topics covered include the difference between statistical significance and clinical meaningfulness; effect sizes and confidence intervals; reliability statistics, including the minimal detectable difference and minimal important difference; and statistical power. Copyright © 2018 Elsevier Inc. All rights reserved.
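
    The distinction between statistical significance and clinical meaningfulness often comes down to reading a confidence interval rather than a p value. A minimal sketch on made-up group data, using a large-sample z interval (a t interval would be more appropriate at this sample size; the z version keeps the sketch short):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def mean_diff_ci(a, b, conf=0.95):
    """Large-sample z confidence interval for a difference in group means."""
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return diff - z * se, diff + z * se

# Made-up outcome scores for a treated and a control group.
treated = [12.1, 10.4, 11.8, 13.0, 12.5, 11.1, 12.9, 10.8]
control = [10.2, 9.8, 10.9, 11.5, 10.7, 9.9, 11.2, 10.1]
lo, hi = mean_diff_ci(treated, control)
print(f"95% CI for the mean difference: ({lo:.2f}, {hi:.2f})")
```

    Here the interval excludes zero, so the difference is statistically significant; whether it matters clinically depends on comparing the whole interval against the minimal important difference.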

  13. Perceived confidence to use female condoms among students in Tertiary Institutions of a Metropolitan City, Southwestern, Nigeria.

    PubMed

    Obembe, Taiwo A; Adebowale, Ayo S; Odebunmi, Kehinde O

    2017-08-11

    Latex condoms for men have been documented to offer high efficacy both as a contraceptive and as protection against sexually transmitted diseases. This equally establishes the importance of continued research on female condoms. This study aims to investigate the perceived confidence to use female condoms amongst undergraduate female students from selected tertiary institutions in Ibadan, Southwestern Nigeria. The study was a descriptive cross-sectional survey involving 388 female undergraduate students selected through a multistage sampling technique. The survey was carried out using pre-tested semi-structured questionnaires. Quantitative data were analyzed using the Statistical Package for the Social Sciences to generate frequencies and cross-tabulations of variables at the 5% level of significance. The mean age of respondents was 18.26 ± 3.45 years, with most students being 20-24 years old (55.2%), single (92.8%), Yoruba (85.6%) and from the polytechnic institutions (41.0%). Only 10.8% had good perceived confidence to use a female condom. Perceived confidence was significantly higher amongst other ethnicities (19.59 ± 3.827) compared to the Yoruba ethnicity (18.04 ± 3.337) (F = 9.935; p < 0.05). Likewise, students from the polytechnic campuses exhibited significantly higher mean scores (18.81 ± 3.187) compared to others (F = 3.724; p < 0.05). Perception of the condom was a significant factor that influenced the confidence to use a female condom (F = 9.896; p < 0.001). Concerted efforts are advocated to improve the low perception of female condoms and the low perceived confidence in their utilization. This would help to transfer decision making and control to women, thus contributing to their empowerment and increased protection from unplanned pregnancies and sexually transmitted diseases.

  14. Math anxiety and exposure to statistics in messages about genetically modified foods: effects of numeracy, math self-efficacy, and form of presentation.

    PubMed

    Silk, Kami J; Parrott, Roxanne L

    2014-01-01

    Health risks are often communicated to the lay public in statistical formats even though low math skills, or innumeracy, have been found to be prevalent among lay individuals. Although numeracy has been a topic of much research investigation, the role of math self-efficacy and math anxiety on health and risk communication processing has received scant attention from health communication researchers. To advance theoretical and applied understanding regarding health message processing, the authors consider the role of math anxiety, including the effects of math self-efficacy, numeracy, and form of presenting statistics on math anxiety, and the potential effects for comprehension, yielding, and behavioral intentions. The authors also examine math anxiety in a health risk context through an evaluation of the effects of exposure to a message about genetically modified foods on levels of math anxiety. Participants (N = 323) were randomly assigned to read a message that varied the presentation of statistical evidence about potential risks associated with genetically modified foods. Findings reveal that exposure increased levels of math anxiety, with increases in math anxiety limiting yielding. Moreover, math anxiety impaired comprehension but was mediated by perceivers' math confidence and skills. Last, math anxiety facilitated behavioral intentions. Participants who received a text-based message with percentages were more likely to yield than participants who received either a bar graph with percentages or a combined form. Implications are discussed as they relate to math competence and its role in processing health and risk messages.

  15. Quantitative comparison of tympanic membrane displacements using two optical methods to recover the optical phase

    NASA Astrophysics Data System (ADS)

    Santiago-Lona, Cynthia V.; Hernández-Montes, María del Socorro; Mendoza-Santoyo, Fernando; Esquivel-Tejeda, Jesús

    2018-02-01

    The study and quantification of the tympanic membrane (TM) displacements add important information to advance the knowledge about the hearing process. A comparative statistical analysis between two commonly used demodulation methods employed to recover the optical phase in digital holographic interferometry, namely the fast Fourier transform and phase-shifting interferometry, is presented as applied to study thin tissues such as the TM. The resulting experimental TM surface displacement data are used to contrast both methods through the analysis of variance and F tests. Data are gathered when the TMs are excited with continuous sound stimuli at levels 86, 89 and 93 dB SPL for the frequencies of 800, 1300 and 2500 Hz under the same experimental conditions. The statistical analysis shows repeatability in z-direction displacements with a standard deviation of 0.086, 0.098 and 0.080 μm using the Fourier method, and 0.080, 0.104 and 0.055 μm with the phase-shifting method at a 95% confidence level for all frequencies. The precision and accuracy are evaluated by means of the coefficient of variation; the results with the Fourier method are 0.06143, 0.06125, 0.06154 and 0.06154, 0.06118, 0.06111 with phase-shifting. The relative error between both methods is 7.143, 6.250 and 30.769%. On comparing the measured displacements, the results indicate that there is no statistically significant difference between both methods for frequencies at 800 and 1300 Hz; however, errors and other statistics increase at 2500 Hz.

  16. The retail availability of tobacco in Tasmania: evidence for a socio-economic and geographical gradient.

    PubMed

    Melody, Shannon M; Martin-Gall, Veronica; Harding, Ben; Veitch, Mark Gk

    2018-03-19

    To describe the retail availability of tobacco and to examine the association between tobacco outlet density and area-level remoteness and socio-economic status classification in Tasmania. Ecological cross-sectional study; analysis of tobacco retail outlet data collected by the Department of Health and Human Services (Tasmania) according to area-level (Statistical Areas Level 2) remoteness (defined by the Remoteness Structure of the Australian Statistical Geographical Standard) and socio-economic status (defined by the 2011 Australian Bureau of Statistics Index of Relative Socioeconomic Advantage and Disadvantage). Tobacco retail outlet density per 1000 residents. On 31 December 2016, there were 1.54 tobacco retail outlets per 1000 persons. The density of outlets was 79% greater in suburbs or towns in outer regional, remote and very remote Tasmania than in inner regional Tasmania (rate ratio [RR], 1.79; 95% confidence interval [CI], 1.29-2.50; P < 0.001). Suburbs or towns in Tasmania with the greatest socio-economic disadvantage had more than twice the number of tobacco outlets per 1000 people as areas of least disadvantage (RR, 2.30; 95% CI, 1.32-4.21; P = 0.014). A disproportionate concentration of tobacco retail outlets in regional and remote Tasmania and in areas of lowest socio-economic status is evident. Our findings are consistent with those of analyses in New South Wales and Western Australia. Progressive tobacco retail restrictions have been proposed as the next frontier in tobacco control. However, the intended and unintended consequences of such policies need to be investigated, particularly for socio-economically deprived and rural areas.

  17. Global Positioning System III (GPS III)

    DTIC Science & Technology

    2015-12-01

    The program entered Thermal Vacuum (TVAC) testing on October 12, 2015, and successfully completed baseline TVAC testing on December 23, 2015, a major system-level event. The current Acquisition Program Baseline (APB) cost estimate references the SCP dated July 02, 2015, and is established at the 60% confidence level; it is built upon the February 2015 estimate. (Accompanying cost table garbled in extraction; figures omitted.)

  18. Study of Montmorillonite Clay for the Removal of Copper (II) by Adsorption: Full Factorial Design Approach and Cascade Forward Neural Network

    PubMed Central

    Turan, Nurdan Gamze; Ozgonenel, Okan

    2013-01-01

    An intensive study has been made of the removal efficiency of Cu(II) from industrial leachate by biosorption of montmorillonite. A 2⁴ factorial design and cascade forward neural network (CFNN) were used to display the significant levels of the analyzed factors on the removal efficiency. The obtained model based on the 2⁴ factorial design was statistically tested using the well-known methods. The statistical analysis proves that the main effects of analyzed parameters were significant by an obtained linear model within a 95% confidence interval. The proposed CFNN model requires less experimental data and minimum calculations. Moreover, it is found to be cost-effective due to inherent advantages of its network structure. Optimization of the levels of the analyzed factors was achieved by minimizing adsorbent dosage and contact time, which were costly, and maximizing Cu(II) removal efficiency. The suggested optimum conditions are initial pH at 6, adsorbent dosage at 10 mg/L, and contact time at 10 min using raw montmorillonite with the Cu(II) removal of 80.7%. At the optimum values, removal efficiency was increased to 88.91% if the modified montmorillonite was used. PMID:24453833
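
    A 2⁴ full factorial design simply enumerates every combination of two levels of four factors, 16 runs in all. A minimal sketch (the factor names and level values below are illustrative, not the study's exact settings):

```python
from itertools import product

# Illustrative factors and (low, high) levels -- not the study's exact values.
factors = {
    "pH": (2, 6),
    "adsorbent_dose_mg_per_L": (2, 10),
    "contact_time_min": (5, 10),
    "temperature_C": (20, 40),
}

# Every combination of levels: 2 * 2 * 2 * 2 = 16 experimental runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"{len(runs)} runs; first run: {runs[0]}")
```

    Fitting a linear model with interaction terms to the responses from these 16 runs yields the main-effect significance tests the abstract refers to.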

  19. Measurement of the local food environment: a comparison of existing data sources.

    PubMed

    Bader, Michael D M; Ailshire, Jennifer A; Morenoff, Jeffrey D; House, James S

    2010-03-01

    Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses (drugstores, liquor stores, bars, convenience stores, restaurants, and grocers) located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design.
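
    The kappa statistic used above measures agreement between two raters beyond what chance alone would produce. A minimal sketch for the binary case (toy data, not the Chicago measurements):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (1 = business type present)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n                  # marginal rates
    p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)          # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Toy data: commercial-database listing vs. direct observation on 12 blocks.
database = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
observed = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
print(f"kappa = {cohens_kappa(database, observed):.3f}")
```

    Kappa of 1 means perfect agreement and 0 means agreement no better than chance, so the study's range of 0.32 to 0.70 indicates fair to substantial agreement between the two data sources.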

  20. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
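
    Log probability regression (often called regression on order statistics, ROS) fits a distribution to the detected values on a probability plot and imputes the censored tail from the fit. The following is a simplified single-detection-limit sketch assuming lognormal data, not the authors' exact implementation:

```python
import numpy as np
from scipy.stats import norm

def ros_lognormal_mean(detected, n_censored):
    """Simplified regression-on-order-statistics (ROS) mean estimate for a
    left-censored lognormal sample with a single detection limit: regress
    log(detected values) on normal scores of their plotting positions, then
    impute the censored tail from the fitted line."""
    n = len(detected) + n_censored
    ranks = np.arange(n_censored + 1, n + 1)      # detected occupy top ranks
    pp = (ranks - 0.375) / (n + 0.25)             # Blom plotting positions
    slope, intercept = np.polyfit(norm.ppf(pp), np.log(np.sort(detected)), 1)
    pp_cens = (np.arange(1, n_censored + 1) - 0.375) / (n + 0.25)
    imputed = np.exp(intercept + slope * norm.ppf(pp_cens))
    return np.concatenate([imputed, detected]).mean()

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)
dl = np.quantile(sample, 0.30)                    # censor the lowest ~30%
est = ros_lognormal_mean(sample[sample >= dl], int((sample < dl).sum()))
print(f"ROS mean estimate: {est:.3f} (true lognormal mean = {np.exp(0.5):.3f})")
```

    Because only the censored observations are imputed and the detected values are used as-is, ROS is robust to moderate departures from the assumed distribution, one reason it performed best in the simulations cited above.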

  1. An Investigation of the Variety and Complexity of Statistical Methods Used in Current Internal Medicine Literature.

    PubMed

    Narayanan, Roshni; Nugent, Rebecca; Nugent, Kenneth

    2015-10-01

    Accreditation Council for Graduate Medical Education guidelines require internal medicine residents to develop skills in the interpretation of medical literature and to understand the principles of research. A necessary component is the ability to understand the statistical methods used and their results, material that is not an in-depth focus of most medical school curricula and residency programs. Given the breadth and depth of the current medical literature and an increasing emphasis on complex, sophisticated statistical analyses, the statistical foundation and education necessary for residents are uncertain. We reviewed the statistical methods and terms used in 49 articles discussed at the journal club in the Department of Internal Medicine residency program at Texas Tech University between January 1, 2013 and June 30, 2013. We collected information on the study type and on the statistical methods used for summarizing and comparing samples, determining the relations between independent variables and dependent variables, and estimating models. We then identified the typical statistics education level at which each term or method is learned. A total of 14 articles came from the Journal of the American Medical Association Internal Medicine, 11 from the New England Journal of Medicine, 6 from the Annals of Internal Medicine, 5 from the Journal of the American Medical Association, and 13 from other journals. Twenty reported randomized controlled trials. Summary statistics included mean values (39 articles), category counts (38), and medians (28). Group comparisons were based on t tests (14 articles), χ2 tests (21), and nonparametric ranking tests (10). The relations between dependent and independent variables were analyzed with simple regression (6 articles), multivariate regression (11), and logistic regression (8). 
Nine studies reported odds ratios with 95% confidence intervals, and seven analyzed test performance using sensitivity and specificity calculations. These papers used 128 statistical terms and context-defined concepts, including some from data analysis (56), epidemiology-biostatistics (31), modeling (24), data collection (12), and meta-analysis (5). Ten different software programs were used in these articles. Based on usual undergraduate and graduate statistics curricula, 64.3% of the concepts and methods used in these papers required at least a master's degree-level statistics education. The interpretation of the current medical literature can require an extensive background in statistical methods at an education level exceeding the material and resources provided to most medical students and residents. Given the complexity and time pressure of medical education, these deficiencies will be hard to correct, but this project can serve as a basis for developing a curriculum in study design and statistical methods needed by physicians-in-training.

  2. Relationship between self-confidence and sex role identity among managerial women and men.

    PubMed

    Chusmir, L H; Koberg, C S

    1991-12-01

    The self-confidence and sex role identities of 437 American female and male managers were examined by using three subscales of the Adjective Check List. Results showed that, contrary to stereotypes and older research, female and male managers were strikingly similar. Women and men with cross-sex role identities showed lower levels of self-confidence than did those with androgynous orientations; high self-confidence was linked with masculine and androgynous orientations. The managers were not significantly different in self-confidence when demographic variables and sex role identity were held constant. Sex role identity (but not gender) was a major factor in the level of self-confidence.

  3. Visualization of the significance of Receiver Operating Characteristics based on confidence ellipses

    NASA Astrophysics Data System (ADS)

    Sarlis, Nicholas V.; Christopoulos, Stavros-Richard G.

    2014-03-01

    The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space. Catalogue identifier: AERY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 11511 No. of bytes in distributed program, including test data, etc.: 72906 Distribution format: tar.gz Programming language: FORTRAN. Computer: Any computer supporting a GNU FORTRAN compiler. Operating system: Linux, MacOS, Windows. RAM: 1Mbyte Classification: 4.13, 9, 14. Nature of problem: The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space. Solution method: Using the statistics of random binary predictions for a given value of the predictor threshold ɛt, one can construct the corresponding confidence ellipses. 
The envelope of these corresponding confidence ellipses is estimated when ɛt varies from 0 to 1. This way a new family of ellipses is obtained, named k-ellipses, which covers the whole ROC plane and leads to a well defined Area Under the Curve (AUC). For the latter quantity, Mason and Graham [1] have shown that it follows the Mann-Whitney U-statistics [2] which can be applied [3] for the estimation of the statistical significance of each k-ellipse. As the transformation is invertible, any point on the ROC plane corresponds to a unique value of k, thus to a unique p-value to obtain this point by chance. The present FORTRAN code provides this p-value field on the ROC plane as well as the k-ellipses corresponding to the (p=)10%, 5% and 1% significance levels using as input the number of the positive (P) and negative (Q) cases to be predicted. Unusual features: In some machines, the compiler directive -O2 or -O3 should be used to avoid NaN’s in some points of the p-field along the diagonal. Running time: Depending on the application, e.g., 4s for an Intel(R) Core(TM)2 CPU E7600 at 3.06 GHz with 2 GB RAM for the examples presented here References: [1] S.J. Mason, N.E. Graham, Quart. J. Roy. Meteor. Soc. 128 (2002) 2145. [2] H.B. Mann, D.R. Whitney, Ann. Math. Statist. 18 (1947) 50. [3] L.C. Dinneen, B.C. Blakesley, J. Roy. Stat. Soc. Ser. C Appl. Stat. 22 (1973) 269.
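    The AUC/Mann-Whitney relationship cited above ([1], [2]) can be sketched in a few lines; the scores below are invented, and the k-ellipse construction itself is not reproduced:

```python
from scipy.stats import mannwhitneyu

# Invented predictor scores; a higher score means "predict positive"
positives = [0.9, 0.8, 0.75, 0.7, 0.6]        # P = 5 cases to be predicted
negatives = [0.65, 0.5, 0.4, 0.3, 0.2, 0.1]   # Q = 6 cases

# AUC equals the Mann-Whitney U statistic divided by P*Q
auc = sum(p > q for p in positives for q in negatives) / \
      (len(positives) * len(negatives))
# One-sided p-value: probability of an AUC this large by chance
pvalue = mannwhitneyu(positives, negatives, alternative="greater").pvalue
```

    This is the same logic the code applies to each k-ellipse: a point in ROC space maps to an AUC, and the Mann-Whitney statistics give the p-value of reaching that point by chance.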

  4. Cross-cultural adaptation and validation of the Injury-Psychological Readiness to Return to Sport scale to Persian language.

    PubMed

    Naghdi, Soofia; Nakhostin Ansari, Noureddin; Farhadi, Yasaman; Ebadi, Safoora; Entezary, Ebrahim; Glazer, Douglas

    2016-10-01

    The aim of the present study was to develop and provide validation statistics for the Persian Injury-Psychological Readiness to Return to Sport scale (I-PRRS) following a cross-sectional and prospective cohort study design. The I-PRRS was forward/back-translated and culturally adapted into the Persian language. The Persian I-PRRS was administered to 100 injured athletes (93 male; age 26.0 ± 5.6 years; time since injury 4.84 ± 6.4 months) and 50 healthy athletes (36 male; mean age 25.7 ± 6.0 years). The Persian I-PRRS was re-administered to 50 injured athletes at 1 week to examine test-retest reliability. There were no floor or ceiling effects, confirming the content validity of the Persian I-PRRS. The internal consistency reliability was good. Excellent test-retest reliability and agreement were demonstrated. The statistically significant difference in Persian I-PRRS total scores between the injured athletes and healthy athletes provides evidence of discriminative validity. The Persian I-PRRS total scores were positively correlated with the Farsi Mood Scale (FARMS) total scores, showing construct validity. The principal component analysis indicated a two-factor solution consisting of "Confidence to play" and "Confidence in the injured body part and skill level". The Persian I-PRRS showed excellent reliability and validity and can be used to assess injured athletes' psychological readiness to return to sport among Persian-speaking populations.

  5. Self-confidence of anglers in identification of freshwater sport fish

    USGS Publications Warehouse

    Chizinski, C.J.; Martin, D. R.; Pope, Kevin L.

    2014-01-01

    Although several studies have focused on how well anglers identify species using replicas and pictures, there has been no study assessing the confidence that can be placed in anglers' ability to identify recreationally important fish. Understanding factors associated with low self-confidence will be useful in tailoring education programmes to improve self-confidence in identifying common species. The purposes of this assessment were to quantify the confidence of recreational anglers to identify 13 commonly encountered warm water fish species and to relate self-confidence to species availability and angler experience. Significant variation was observed in anglers' self-confidence among species and levels of self-declared skill, with greater confidence associated with greater skill and with greater exposure. This study of angler self-confidence strongly highlights the need for educational programmes that target lower skilled anglers and the importance of teaching all anglers about less common species, regardless of skill level.

  6. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

    NASA Astrophysics Data System (ADS)

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-09-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34) based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence for both knowledge and the use of science argumentation. Student reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect showed up between students’ preferred learning styles and the general faculty’s common teaching methods; students learned more by practicing scientific argumentation than listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

  7. Statistical model for forecasting monthly large wildfire events in western United States

    Treesearch

    Haiganoush K. Preisler; Anthony L. Westerling

    2006-01-01

    The ability to forecast the number and location of large wildfire events (with specified confidence bounds) is important to fire managers attempting to allocate and distribute suppression efforts during severe fire seasons. This paper describes the development of a statistical model for assessing the forecasting skills of fire-danger predictors and producing 1-month-...

  8. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
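    A hedged sketch of the maximum-likelihood fit plus bootstrap confidence limits described above, on synthetic (not observed) storm maxima:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic -Dst storm-time maxima (nT); the study used 1957-2012 observations
storms = rng.lognormal(mean=5.0, sigma=0.6, size=120)
years, threshold = 56.0, 850.0      # observation span; Carrington-class -Dst

def rate_per_century(sample):
    """Maximum-likelihood log-normal fit, then the expected number of
    threshold exceedances per century."""
    logs = np.log(sample)
    p = norm.sf(np.log(threshold), loc=logs.mean(), scale=logs.std())
    return len(sample) / years * p * 100.0

# Percentile bootstrap: resample storms with replacement, refit each time
boot = [rate_per_century(rng.choice(storms, size=len(storms)))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% confidence interval on the rate
```

    The wide intervals reported in the abstract arise naturally here: the exceedance probability depends on the fitted tail, which varies considerably across bootstrap resamples.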

  9. Health care seeking patterns and determinants of out-of-pocket expenditure for Malaria for the children under-five in Uganda

    PubMed Central

    2013-01-01

    Background The objectives of this study were to assess the patterns of treatment seeking behaviour for children under five with malaria; and to examine the statistical relationship between out-of-pocket expenditure (OOP) on malaria treatment for under-fives and source of treatment, place of residence, education and wealth characteristics of Uganda households. OOP expenditure on health care is now a development concern due to its negative effect on households’ ability to finance consumption of other basic needs. Methods The 2009 Uganda Malaria Indicator Survey was the source of data on treatment seeking behaviour for under-five children with malaria, and patterns and levels of OOP expenditure for malaria treatment. Binomial logit and Log-lin regression models were estimated. In logit model the dependent variable was a dummy (1=incurred some OOP, 0=none incurred) and independent variables were wealth quintiles, rural versus urban, place of treatment, education level, sub-region, and normal duty disruption. The dependent variable in Log-lin model was natural logarithm of OOP and the independent variables were the same as mentioned above. Results Five key descriptive analysis findings emerge. First, malaria is quite prevalent at 44.7% among children below the age of five. Second, a significant proportion seeks treatment (81.8%). Third, private providers are the preferred option for the under-fives for the treatment of malaria. Fourth, the majority pay about 70.9% for either consultation, medicines, transport or hospitalization but the biggest percent of those who pay, do so for medicines (54.0%). Fifth, hospitalization is the most expensive at an average expenditure of US$7.6 per child, even though only 2.9% of those that seek treatment are hospitalized. 
The binomial logit model slope coefficients for the variables richest wealth quintile, Private facility as first source of treatment, and sub-regions Central 2, East central, Mid-eastern, Mid-western, and Normal duties disrupted were positive and statistically significant at the 99% confidence level. On the other hand, the Log-lin model slope coefficients for Traditional healer, Sought treatment from one source, Primary educational level, North East, Mid Northern and West Nile variables had a negative sign and were statistically significant at the 95% confidence level. Conclusion Given that OOP expenditure is still prevalent and private providers are the preferred choice, increasing public provision may not be the sole answer. Plans to improve malaria treatment should explicitly incorporate efforts to protect households from high OOP expenditures. This calls for provision of subsidies to enable the private sector to reduce prices, regulation of prices of malaria medicines, and reduction/removal of import duties on such medicines. PMID:23721217
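    The binomial logit estimation used above can be sketched with a minimal Newton-Raphson fit; the data and the single "wealth" covariate are simulated stand-ins, not the survey variables:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Binomial logit fitted by Newton-Raphson (IRLS);
    X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        W = p * (1.0 - p)                         # IRLS weights
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

# Simulated stand-in: does a household incur any OOP expenditure (1/0)?
rng = np.random.default_rng(1)
wealth = rng.normal(size=400)                     # hypothetical wealth index
X = np.column_stack([np.ones(400), wealth])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * wealth))))
beta = fit_logit(X, y)            # should recover roughly (0.5, 1.2)
```

    A positive fitted slope, as for the richest-quintile dummy in the study, means the covariate raises the odds of incurring any out-of-pocket payment.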

  10. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed information on the design parameter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or maximum pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  11. The dynamics of learning about a climate threshold

    NASA Astrophysics Data System (ADS)

    Keller, Klaus; McInerney, David

    2008-02-01

    Anthropogenic greenhouse gas emissions may trigger threshold responses of the climate system. One relevant example of such a potential threshold response is a shutdown of the North Atlantic meridional overturning circulation (MOC). Numerous studies have analyzed the problem of early MOC change detection (i.e., detection before the forcing has committed the system to a threshold response). Here we analyze the early MOC prediction problem. To this end, we virtually deploy an MOC observation system into a simple model that mimics potential future MOC responses and analyze the timing of confident detection and prediction. Our analysis suggests that a confident prediction of a potential threshold response can require century time scales, considerably longer than the time required for confident detection. The signal enabling early prediction of an approaching MOC threshold in our model study is associated with the rate at which the MOC intensity decreases for a given forcing. A faster MOC weakening implies a higher MOC sensitivity to forcing. An MOC sensitivity exceeding a critical level results in a threshold response. Determining whether an observed MOC trend in our model differs in a statistically significant way from an unforced scenario (the detection problem) imposes lower requirements on an observation system than the determination of whether the MOC will shut down in the future (the prediction problem). As a result, the virtual observation systems designed in our model for early detection of MOC changes might well fail at the task of early and confident prediction. Transferring this conclusion to the real world requires a considerably refined MOC model, as well as a more complete consideration of relevant observational constraints.

  12. Exact nonparametric confidence bands for the survivor function.

    PubMed

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
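    The confidence bands above are built on the Kaplan-Meier estimator; a minimal sketch of that estimator on invented censored survival data (the exact band computation, via Noe recursions, is not reproduced here):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survivor estimates. events[i] is 1 for an observed
    event, 0 for right censoring. Returns (time, S(t)) at each event time."""
    pairs = sorted(zip(times, events))
    n_at_risk, s, curve, i = len(pairs), 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs[i:] if tt == t and e == 1)
        ties = sum(1 for tt, _ in pairs[i:] if tt == t)
        if deaths:
            s *= 1.0 - deaths / n_at_risk          # product-limit step
            curve.append((t, s))
        n_at_risk -= ties
        i += ties
    return curve

# Invented survival times (months); event = 0 marks right-censored follow-up
curve = kaplan_meier([3, 5, 5, 8, 12, 16, 16, 20],
                     [1, 1, 0, 1, 0, 1, 0, 0])
```

    The exact bands then place simultaneous lower and upper envelopes around this step function so that the whole curve is covered at the specified global confidence level.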

  13. The effect of personality traits and psychosocial training on burnout syndrome among healthcare students.

    PubMed

    Skodova, Zuzana; Lajciakova, Petra

    2013-11-01

    The aims of this paper were to explore the influence of personality factors on student burnout syndrome and to explore the effect of psychosocial training on burnout and personality predictors among university students in health care professions. A quasi-experimental pre-test/post-test design was used to evaluate the effect of psychosocial training. A sample of 111 university students was divided into experimental and control groups (average age 20.7 years, SD=2.8 years; 86.1% females). The School Burnout Inventory (SBI), Sense of Coherence (SOC) questionnaire, and Rosenberg's Self-esteem scale were employed. Linear regression and analysis of variance were applied for statistical analysis. The results show that socio-psychological training had a positive impact on the level of burnout and on personality factors that are related to burnout. After completing the training, the level of burnout in the experimental group significantly decreased (95% confidence interval: 0.93, 9.25), and no significant change was observed in the control group. Furthermore, respondents' sense of coherence increased in the experimental group (95% confidence interval: -9.11, 2.64), but there were no significant changes in respondents' self-esteem levels in either group. Psychosocial training positively influenced burnout among students in health care professions. Because the coping strategies that were used during the study are similar to effective work coping strategies, psychosocial training can be considered to be an effective tool to prevent burnout in the helping professions. Copyright © 2013. Published by Elsevier Ltd.
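    Intervals such as the reported (0.93, 9.25) are conventionally t-based confidence intervals for a mean change; a sketch on hypothetical pre-minus-post scores (not the study's data):

```python
import math
from statistics import mean, stdev
from scipy.stats import t

def mean_change_ci(diffs, level=0.95):
    """Two-sided t confidence interval for a mean pre-post change."""
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)               # standard error of the mean
    half = t.ppf((1.0 + level) / 2.0, n - 1) * se  # half-width of the interval
    return mean(diffs) - half, mean(diffs) + half

# Hypothetical pre-minus-post burnout scores for a trained group
lo, hi = mean_change_ci([4, 7, -1, 6, 3, 9, 2, 5, 8, 0])
# An interval entirely above zero indicates a significant decrease
```

    An interval that straddles zero, like the sense-of-coherence interval (-9.11, 2.64) above, cannot rule out no change at the 5% level.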

  14. Comparison of Mental Toughness and Power Test Performances in High-Level Kickboxers by Competitive Success.

    PubMed

    Slimani, Maamer; Miarka, Bianca; Briki, Walid; Cheour, Foued

    2016-06-01

    Kickboxing is a high-intensity intermittent striking combat sport, which is characterized by complex skills and tactical key actions of short duration. The present study compared and verified the relationship between mental toughness (MT), countermovement jump (CMJ) and medicine ball throw (MBT) power tests by outcomes of high-level kickboxers during a National Championship. Thirty-two high-level male kickboxers (winner = 16 and loser = 16: 21.2 ± 3.1 years, 1.73 ± 0.07 m, and 70.2 ± 9.4 kg) were analyzed using the CMJ, MBT tests and sports mental toughness questionnaire (SMTQ; based on confidence, constancy and control subscales), before the fights of the 2015 national championship (16 bouts). In the statistical analysis, the Mann-Whitney test and multiple linear regression were used to compare groups and to assess relationships, respectively (P ≤ 0.05). The present results showed significant differences between losers vs. winners, respectively, of total MT (7(7;8) vs. 11(10.2;11)), confidence (3(3;3) vs. 4(4;4)), constancy (2(2;2) vs. 3(3;3)), control (2(2;3) vs. 4(4;4)) subscales and MBT (4.1(4;4.3) vs. 4.6(4.4;4.8)). The multiple linear regression showed strong associations between MT results and outcome (r = 0.89), MBT (r = 0.84) and CMJ (r = 0.73). The findings suggest that MT will be more predictive of performance in those sports and in the outcome of competition.

  15. Evaluation of 3M™ Molecular Detection Assay (MDA) Listeria for the Detection of Listeria species in Selected Foods and Environmental Surfaces: Collaborative Study, First Action 2014.06.

    PubMed

    Bird, Patrick; Flannery, Jonathan; Crowley, Erin; Agin, James; Goins, David; Monteroso, Lisa; Benesh, DeAnn

    2015-01-01

    The 3M™ Molecular Detection Assay (MDA) Listeria is used with the 3M Molecular Detection System for the detection of Listeria species in food, food-related, and environmental samples after enrichment. The assay utilizes loop-mediated isothermal amplification to rapidly amplify Listeria target DNA with high specificity and sensitivity, combined with bioluminescence to detect the amplification. The 3M MDA Listeria method was evaluated using an unpaired study design in a multilaboratory collaborative study and compared to the AOAC Official Method of AnalysisSM (OMA) 993.12 Listeria monocytogenes in Milk and Dairy Products reference method for the detection of Listeria species in full-fat (4% milk fat) cottage cheese (25 g test portions). A total of 15 laboratories located in the continental United States and Canada participated. Each matrix had three inoculation levels: an uninoculated control level (0 CFU/test portion), and two levels artificially contaminated with Listeria monocytogenes, a low inoculum level (0.2-2 CFU/test portion) and a high inoculum level (2-5 CFU/test portion) using nonheat-stressed cells. In total, 792 unpaired replicate portions were analyzed. Statistical analysis was conducted according to the probability of detection (POD) model. Results obtained for the low inoculum level test portions produced a difference in cross-laboratory POD value of -0.07 with a 95% confidence interval of (-0.19, 0.06). No statistically significant differences were observed in the number of positive samples detected by the 3M MDA Listeria method versus the AOAC OMA method.
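    A simplified sketch of a cross-laboratory difference in probability of detection (dPOD) with a Wald-style 95% interval; the counts are hypothetical, and the OMA POD model's exact interval formula may differ:

```python
import math

def dpod_ci(x1, n1, x2, n2, z=1.96):
    """Difference in probability of detection (dPOD) between two methods,
    with a Wald-style 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

# Hypothetical low-inoculum results: candidate 48/90, reference 54/90 positive
d, (lo, hi) = dpod_ci(48, 90, 54, 90)
# An interval spanning zero -> no statistically significant difference
```

    This mirrors the abstract's reasoning: a dPOD of -0.07 with interval (-0.19, 0.06) contains zero, so the two methods are not statistically distinguishable at that contamination level.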

  16. Detection of calcification clusters in digital breast tomosynthesis slices at different dose levels utilizing a SRSAR reconstruction and JAFROC

    NASA Astrophysics Data System (ADS)

    Timberg, P.; Dustler, M.; Petersson, H.; Tingberg, A.; Zackrisson, S.

    2015-03-01

    Purpose: To investigate detection performance for calcification clusters in reconstructed digital breast tomosynthesis (DBT) slices at different dose levels using a Super Resolution and Statistical Artifact Reduction (SRSAR) reconstruction method. Method: Simulated calcifications with irregular profile (0.2 mm diameter) were combined to form clusters that were added to projection images (1-3 per abnormal image) acquired on a DBT system (Mammomat Inspiration, Siemens). The projection images were dose reduced by software to form 35 abnormal cases and 25 normal cases as if acquired at 100%, 75% and 50% dose level (AGD of approximately 1.6 mGy for a 53 mm standard breast, measured according to EUREF v0.15). A standard FBP and a SRSAR reconstruction method (utilizing IRIS (iterative reconstruction filters), and outlier detection using Maximum-Intensity Projections and Average-Intensity Projections) were used to reconstruct single central slices to be used in a Free-response task (60 images per observer and dose level). Six observers participated and their task was to detect the clusters and assign confidence ratings in randomly presented images from the whole image set (balanced by dose level). Trials were separated by one week to reduce possible memory bias. The outcome was analyzed for statistical differences using Jackknifed Alternative Free-response Receiver Operating Characteristics. Results: The results indicate that it is possible to reduce the dose by 50% with SRSAR without jeopardizing cluster detection. Conclusions: The detection performance for clusters can be maintained at a lower dose level by using SRSAR reconstruction.

  17. Comparing the Use of 3D Photogrammetry and Computed Tomography in Assessing the Severity of Single-Suture Nonsyndromic Craniosynostosis

    PubMed Central

    Ho, Olivia A.; Saber, Nikoo; Stephens, Derek; Clausen, April; Drake, James; Forrest, Christopher

    2017-01-01

    Purpose: Single-suture nonsyndromic craniosynostosis is diagnosed using clinical assessment and computed tomography (CT). With increasing awareness of the associated risks of radiation exposure, the use of CT is particularly concerning in patients with craniosynostosis since they are exposed at a younger age and more frequently than the average child. Three-dimensional (3D) photogrammetry is advantageous—it involves no radiation, is conveniently obtainable within clinic, and does not require general anaesthesia. This study aims to assess how 3D photogrammetry compares to CT in the assessment of craniosynostosis severity, to quantify surgical outcomes, and analyze the validity of 3D photogrammetry in craniosynostosis. Methods: Computed tomography images and 3D photographs of patients who underwent craniosynostosis surgery were assessed and aligned to best fit. The intervening area between the CT and 3D photogrammetry curves at the supraorbital bar (bandeau) level in axial view was calculated. Statistical analysis was performed using Student t test. Ninety-five percent confidence intervals were determined and equivalence margins were applied. Results: In total, 41 pairs of CTs and 3D photographs were analyzed. The 95% confidence interval was 198.16 to 264.18 mm2 and the mean was 231.17 mm2. When comparisons were made in the same bandeau region omitting the temporalis muscle, the 95% confidence interval was 108.94 to 147.38 mm2, and the mean was 128.16 mm2. Although a statistically significant difference between the modalities was found, it can be attributed to the dampening effect of soft tissue. Conclusion: Within certain error margins, 3D photogrammetry is comparable to CT in assessing the severity of single-suture nonsyndromic craniosynostosis. However, a dampening effect can be attributed to the soft tissue. Three-dimensional photogrammetry may be more applicable for severe cases of craniosynostosis but not milder deformity. 
It may also be beneficial for assessing the overall appearance and aesthetics but not for determining underlying bony severity. PMID:29026817

  18. Comparing the Use of 3D Photogrammetry and Computed Tomography in Assessing the Severity of Single-Suture Nonsyndromic Craniosynostosis.

    PubMed

    Ho, Olivia A; Saber, Nikoo; Stephens, Derek; Clausen, April; Drake, James; Forrest, Christopher; Phillips, John

    2017-05-01

    Single-suture nonsyndromic craniosynostosis is diagnosed using clinical assessment and computed tomography (CT). With increasing awareness of the associated risks of radiation exposure, the use of CT is particularly concerning in patients with craniosynostosis since they are exposed at a younger age and more frequently than the average child. Three-dimensional (3D) photogrammetry is advantageous-it involves no radiation, is conveniently obtainable within clinic, and does not require general anaesthesia. This study aims to assess how 3D photogrammetry compares to CT in the assessment of craniosynostosis severity, to quantify surgical outcomes, and analyze the validity of 3D photogrammetry in craniosynostosis. Computed tomography images and 3D photographs of patients who underwent craniosynostosis surgery were assessed and aligned to best fit. The intervening area between the CT and 3D photogrammetry curves at the supraorbital bar (bandeau) level in axial view was calculated. Statistical analysis was performed using Student t test. Ninety-five percent confidence intervals were determined and equivalence margins were applied. In total, 41 pairs of CTs and 3D photographs were analyzed. The 95% confidence interval was 198.16 to 264.18 mm2 and the mean was 231.17 mm2. When comparisons were made in the same bandeau region omitting the temporalis muscle, the 95% confidence interval was 108.94 to 147.38 mm2, and the mean was 128.16 mm2. Although a statistically significant difference between the modalities was found, it can be attributed to the dampening effect of soft tissue. Within certain error margins, 3D photogrammetry is comparable to CT in assessing the severity of single-suture nonsyndromic craniosynostosis. However, a dampening effect can be attributed to the soft tissue. Three-dimensional photogrammetry may be more applicable for severe cases of craniosynostosis but not milder deformity. 
It may also be beneficial for assessing the overall appearance and aesthetics but not for determining underlying bony severity.

  19. Neutral vs positive oral contrast in diagnosing acute appendicitis with contrast-enhanced CT: sensitivity, specificity, reader confidence and interpretation time

    PubMed Central

    Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F

    2011-01-01

    Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1=poor to 4=excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was found to be 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents (sensitivities p-values >0.6; specificities p-values>0.08), in the area under the ROC curve (range 0.95–0.99) or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Average interpretation times showed no statistically significant differences between the agents. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365

  20. Using real-time ultrasound imaging as adjunct teaching tools to enhance physical therapist students' ability and confidence to perform traction of the knee joint.

    PubMed

    Markowski, Alycia; Watkins, Maureen K; Burnett, Todd; Ho, Melissa; Ling, Michael

    2018-04-01

    Often, physical therapy students struggle with the skill and the confidence to perform manual techniques for musculoskeletal examination. Current teaching methods lack concurrent objective feedback. Real-time ultrasound imaging (RTUI) has the advantage of generating visualization of anatomical structures in real time in an efficient and safe manner. We hypothesized that using RTUI to augment teaching with concurrent objective visual feedback would result in students' improved ability to create a change in joint space when performing a manual knee traction, and in higher confidence scores. Eighty-six students were randomly allocated to a control or an experimental group. All participants received baseline instructions on how to perform knee traction. The control group received standardized lab instruction (visual, video, and instructor/partner feedback). The experimental group received standardized lab instruction augmented with RTUI feedback. Pre- and post-training data collection consisted of measuring participants' ability to create changes in joint space when performing knee traction, a confidence survey evaluating perceived ability, and a reflection paper. Joint space changes between groups were compared using a paired t-test. Surveys were analyzed with descriptive statistics and compared using the Wilcoxon rank-sum test; for the reflection papers, themes were identified and descriptive statistics reported. Although there were no statistically significant differences between the control and the experimental group, overall scores improved. Qualitative data suggest students found the use of ultrasound imaging beneficial and would like more exposure. This novel approach to teaching knee traction with RTUI has potential and may be a basis for further studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data

    NASA Astrophysics Data System (ADS)

    Stegmeir, Matthew; Kassen, Dan

    2016-11-01

    As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
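    The convergence idea described above can be sketched in a few lines: treat the estimated confidence-interval half-width of the running mean as the stopping criterion and keep sampling until it falls below a tolerance. This is a minimal illustration with synthetic Gaussian "velocity" samples, not the authors' actual procedure; the tolerance, critical value, and minimum sample count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=5.0, scale=0.8, size=5000)  # synthetic velocity record, m/s

def n_for_converged_mean(x, rel_tol=0.01, z=1.96, n_min=30):
    """Smallest sample count at which the 95% CI half-width of the
    running mean falls below rel_tol * |mean|."""
    for n in range(n_min, len(x) + 1):
        m = x[:n].mean()
        half_width = z * x[:n].std(ddof=1) / np.sqrt(n)
        if half_width <= rel_tol * abs(m):
            return n
    return None  # never converged within the available record

n_req = n_for_converged_mean(samples)
print(n_req)
```

For these parameters the required count is close to the analytic estimate (1.96 σ / (rel_tol · μ))², illustrating how the CI directly sets the sampling budget for a converged mean.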

  2. A Content-Adaptive Analysis and Representation Framework for Audio Event Discovery from "Unscripted" Multimedia

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Regunathan; Divakaran, Ajay; Xiong, Ziyou; Otsuka, Isao

    2006-12-01

    We propose a content-adaptive analysis and representation framework to discover events using audio features from "unscripted" multimedia such as sports and surveillance for summarization. The proposed analysis framework performs an inlier/outlier-based temporal segmentation of the content. It is motivated by the observation that "interesting" events in unscripted multimedia occur sparsely in a background of usual or "uninteresting" events. We treat the sequence of low/mid-level features extracted from the audio as a time series and identify subsequences that are outliers. The outlier detection is based on eigenvector analysis of the affinity matrix constructed from statistical models estimated from the subsequences of the time series. We define the confidence measure on each of the detected outliers as the probability that it is an outlier. Then, we establish a relationship between the parameters of the proposed framework and the confidence measure. Furthermore, we use the confidence measure to rank the detected outliers in terms of their departures from the background process. Our experimental results with sequences of low- and mid-level audio features extracted from sports video show that "highlight" events can be extracted effectively as outliers from a background process using the proposed framework. We proceed to show the effectiveness of the proposed framework in bringing out suspicious events from surveillance videos without any a priori knowledge. We show that such temporal segmentation into background and outliers, along with the ranking based on the departure from the background, can be used to generate content summaries of any desired length. Finally, we also show that the proposed framework can be used to systematically select "key audio classes" that are indicative of events of interest in the chosen domain.

  3. Near-peer medical student simulation training.

    PubMed

    Cash, Thomas; Brand, Eleanor; Wong, Emma; Richardson, Jay; Athorn, Sam; Chowdhury, Faiza

    2017-06-01

    There is growing concern that medical students are inadequately prepared for life as a junior doctor. A lack of confidence managing acutely unwell patients is often cited as a barrier to good clinical care. With medical schools investing heavily in simulation equipment, we set out to explore whether near-peer simulation training is an effective teaching format. Medical students in their third year of study and above were invited to attend a 90-minute simulation teaching session. The sessions were designed and delivered by final-year medical students using clinical scenarios mapped to the Sheffield MBChB curriculum. Candidates were required to assess, investigate and manage an acutely unwell simulated patient. Pre- and post-simulation training Likert scale questionnaires were completed relating to self-reported confidence levels. Questionnaires were completed by 25 students (100% response rate); 52 per cent of students had no prior simulation experience. There were statistically significant improvements in self-reported confidence levels in each of the six areas assessed (p < 0.005). Thematic analysis of free-text comments indicated that candidates enjoyed the practical format of the sessions and found the experience useful. Our results suggest that near-peer medical student simulation training benefits both teacher and learner, and that this simple model could easily be replicated at other medical schools. As the most junior members of the team, medical students are often confined to observer status. Simulation empowers students to practise independently in a safe and protected environment. Furthermore, it may help to alleviate anxiety about starting work as a junior doctor and improve future patient care. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  4. Intensive skills week for military medical students increases technical proficiency, confidence, and skills to minimize negative stress.

    PubMed

    Mueller, Genevieve; Hunt, Bonnie; Wall, Van; Rush, Robert; Molof, Alan; Schoeff, Jonathan; Wedmore, Ian; Schmid, James; Laporta, Anthony

    2012-01-01

    The effects of stress-induced cortisol on learning and memory are well documented in the literature.1-3 Memory and learning are enhanced at low levels, while high levels are detrimental. Repetitive training in stressful situations enables management of the stress response4 as demonstrated by the high-intensity training military members undergo to prepare for tactical situations. Appropriate management of one's stress response is critical in the medical field, as the negative effects of stress can potentially hinder life-saving procedures and treatments. This also applies to physicians-in-training as they learn and practice triage, emergency medicine, and surgical skills prior to graduation. Rocky Vista University's Military Medicine Honors Track (MMHT) held a week-long, high-intensity emergency medicine and surgical Intensive Skills Week (ISW), facilitated by military and university physicians, to advance students' skills and maximize training using the Human Worn Partial Surgical Task Simulator (Cut Suit). The short-term goal of the ISW was to overcome negative stress responses to increase confidence, technical and non-technical knowledge, and skill in surgery and emergency medicine in an effort to improve performance as third-year medical students. The long-term goal was to enhance performance and proficiency in residency and future medical practice. The metrics for the short-term goals were the focus of this pilot study. Results show an increase in confidence and a decrease in perceived stress, as well as statistically significant improvements in technical and non-technical skills and surgical instrumentation knowledge throughout the week. There is a correlative benefit to physician and non-physician military personnel, especially Special Operations Forces (SOF) medical personnel, from developing and implementing similar training programs when live tissue or cadaver models are unavailable or unfeasible.

  5. Community-level Sports Group Participation and Older Individuals' Depressive Symptoms.

    PubMed

    Tsuji, Taishi; Miyaguni, Yasuhiro; Kanamori, Satoru; Hanazato, Masamichi; Kondo, Katsunori

    2018-06-01

    Community-level group participation is a structural aspect of social capital that may have a contextual influence on an individual's health. Herein, we sought to investigate a contextual relationship between the community-level prevalence of sports group participation and depressive symptoms in older individuals. We used data from the 2010 Japan Gerontological Evaluation Study, a population-based, cross-sectional study of individuals 65 yr or older without long-term care needs in Japan. Overall, 74,681 participants in 516 communities were analyzed. Depressive symptoms were defined as a 15-item Geriatric Depression Scale score of ≥5. Participation in a sports group at least 1 d·month⁻¹ (once a month or more often) was defined as "participation." For this study, we applied two-level multilevel Poisson regression analysis stratified by sex and calculated prevalence ratios (PR) and 95% confidence intervals (CI). Overall, 17,420 individuals (23.3%) had depressive symptoms, and 16,915 (22.6%) participated in a sports group. A higher prevalence of community-level sports group participation had a statistically significant relationship with a lower likelihood of depressive symptoms (male: PR, 0.89 (95% CI, 0.85-0.92); female: PR, 0.96 (95% CI, 0.92-0.99), estimated per 10% increase in the participation proportion) after adjusting for individual-level sports group participation, age, diseases, family form, alcohol, smoking, education, equivalent income, and population density. We found statistically significant cross-level interaction terms in male participants only (PR, 0.86; 95% CI, 0.77-0.95). We found a contextual preventive relationship between community-level sports group participation and depressive symptoms in older individuals. Therefore, promoting sports groups in a community may be effective as a population-based strategy for the prevention of depression in older individuals. Furthermore, the benefit may favor male sports group participants.

  6. MICROLENSING OF QUASAR BROAD EMISSION LINES: CONSTRAINTS ON BROAD LINE REGION SIZE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerras, E.; Mediavilla, E.; Jimenez-Vicente, J.

    2013-02-20

    We measure the differential microlensing of the broad emission lines between 18 quasar image pairs in 16 gravitational lenses. We find that the broad emission lines are in general weakly microlensed. The results show, at a modest level of confidence (1.8σ), that high-ionization lines such as C IV are more strongly microlensed than low-ionization lines such as Hβ, indicating that the high-ionization line emission regions are more compact. If we statistically model the distribution of microlensing magnifications, we obtain estimates for the broad-line region size of r_s = 24 (+22/-15) lt-day and r_s = 55 (+150/-35) lt-day (90% confidence) for the high- and low-ionization lines, respectively. When the samples are divided into higher- and lower-luminosity quasars, we find that the line emission regions of more luminous quasars are larger, with a slope consistent with the expected scaling from photoionization models. Our estimates also agree well with the results from local reverberation mapping studies.

  7. Confidence in the safety of standard childhood vaccinations among New Zealand health professionals.

    PubMed

    Lee, Carol; Duck, Isabelle; Sibley, Chris G

    2018-05-04

    To investigate the level of confidence in the safety of standard childhood vaccinations among health professionals in New Zealand. Data from the 2013/14 New Zealand Attitudes and Values Study (NZAVS) were used to investigate the level of agreement that "it is safe to vaccinate children following the standard New Zealand immunisation schedule" among different classes of health professionals (N=1,032). Most health professionals showed high levels of vaccine confidence, with 96.7% of those describing their occupation as GP or simply 'doctor' (GPs/doctors) and 90.7% of pharmacists expressing strong vaccine confidence. However, there were important disparities between some other classes of health professionals, with only 65.1% of midwives and 13.6% of practitioners of alternative medicine expressing high vaccine confidence. As health professionals are a highly trusted source of vaccine information, communicating the consensus of belief among GPs/doctors that vaccines are safe may help provide reassurance for parents who ask about vaccine safety. However, the lower level of vaccine confidence among midwives is a matter of concern that may have a negative influence on parental perceptions of vaccinations.

  8. Brain fingerprinting field studies comparing P300-MERMER and P300 brainwave responses in the detection of concealed information.

    PubMed

    Farwell, Lawrence A; Richardson, Drew C; Richardson, Graham M

    2013-08-01

    Brain fingerprinting detects concealed information stored in the brain by measuring brainwave responses. We compared P300 and P300-MERMER event-related brain potentials for error rate/accuracy and statistical confidence in four field/real-life studies. 76 tests detected presence or absence of information regarding (1) real-life events including felony crimes; (2) real crimes with substantial consequences (either a judicial outcome, i.e., evidence admitted in court, or a $100,000 reward for beating the test); (3) knowledge unique to FBI agents; and (4) knowledge unique to explosives (EOD/IED) experts. With both P300 and P300-MERMER, error rate was 0 %: determinations were 100 % accurate, no false negatives or false positives; also no indeterminates. Countermeasures had no effect. Median statistical confidence for determinations was 99.9 % with P300-MERMER and 99.6 % with P300. Brain fingerprinting methods and scientific standards for laboratory and field applications are discussed. Major differences in methods that produce different results are identified. Markedly different methods in other studies have produced over 10 times higher error rates and markedly lower statistical confidences than those of these, our previous studies, and independent replications. Data support the hypothesis that accuracy, reliability, and validity depend on following the brain fingerprinting scientific standards outlined herein.

  9. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the gain in accuracy can be as great as 6 °C, roughly five times better than relying on manufacturers' tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.
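    As a concrete illustration of expressing calibration accuracy as a confidence interval, the snippet below computes a t-based 95% interval for the mean offset of repeated thermocouple readings against a reference temperature. The readings are invented for illustration; they are not data from the cited report.

```python
import numpy as np
from scipy import stats

# Hypothetical repeated readings of a type-K thermocouple at a 400.0 °C
# reference point (values are illustrative, not from the NASA study).
readings = np.array([401.2, 400.8, 401.5, 400.9, 401.1, 401.4, 400.7, 401.0])

n = len(readings)
mean_offset = readings.mean() - 400.0                 # average calibration offset
sem = readings.std(ddof=1) / np.sqrt(n)               # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                 # two-sided 95% critical value
ci = (mean_offset - t_crit * sem, mean_offset + t_crit * sem)
print(f"offset = {mean_offset:+.2f} °C, 95% CI = ({ci[0]:+.2f}, {ci[1]:+.2f}) °C")
```

Because the interval excludes zero, one would conclude this instrument reads systematically high and apply the fitted offset as a correction.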

  10. Effect of plasma arc welding variables on fusion zone grain size and hardness of AISI 321 austenitic stainless steel

    NASA Astrophysics Data System (ADS)

    Kondapalli, S. P.

    2017-12-01

    In the present work, pulsed-current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate, and pulse width are chosen as the input variables, whereas grain size and hardness are considered as output responses. The response surface method is adopted using a Box-Behnken design, and in total 27 experiments are performed. An empirical relation between the input variables and output responses is developed using statistical software and checked for adequacy with analysis of variance (ANOVA) at the 95% confidence level. The main and interaction effects of the input variables on the output responses are also studied.

  11. Searches for anomalous coupling in top-quark interaction with the W boson and b quark, along with searches for quark-flavor-changing neutral currents, in an analysis of data from the CMS experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boos, E. E.; Bunichev, V. E.; Vorotnikov, G. A.

    2016-01-15

    The results of searches for effects beyond the Standard Model in processes of single top-quark production in the CMS experiment are presented. Anomalous contributions of the vector and magnetic types in top-quark interaction with the W boson and b quark and quark-flavor-changing neutral currents in top-quark interaction with the c or u quark via gluon exchange were studied. The respective analysis was performed with the aid of Bayesian neural networks. No statistically significant deviations were found, and upper limits on anomalous couplings at a 95% confidence level were set.

  12. Statistical inference for tumor growth inhibition T/C ratio.

    PubMed

    Wu, Jianrong

    2010-09-01

    The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
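    The nonparametric bootstrap interval described in this abstract can be sketched as follows: resample each treatment arm independently and take percentiles of the resampled ratio of means. The tumor-volume numbers below are simulated placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical final tumor volumes (mm^3) for 10 xenografts per arm;
# illustrative only, not the study's data.
treated = rng.lognormal(mean=5.0, sigma=0.4, size=10)
control = rng.lognormal(mean=5.8, sigma=0.4, size=10)

tc_ratio = treated.mean() / control.mean()            # point estimate of T/C

# Nonparametric bootstrap: resample each arm independently with replacement.
boot = np.empty(5000)
for i in range(5000):
    t = rng.choice(treated, size=treated.size, replace=True)
    c = rng.choice(control, size=control.size, replace=True)
    boot[i] = t.mean() / c.mean()

lo, hi = np.percentile(boot, [2.5, 97.5])             # percentile 95% CI
print(f"T/C = {tc_ratio:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

An activity rating based on a cutoff (say, T/C ≤ 0.45) can then be stated with an interval attached, rather than from the point estimate alone.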

  13. Statistical considerations in the development of injury risk functions.

    PubMed

    McMurry, Timothy L; Poplin, Gerald S

    2015-01-01

    We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
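    The "correct scale" point about pointwise confidence intervals can be illustrated with a small sketch: fit a logistic injury-risk model, form 95% intervals on the logit scale, and back-transform them so the risk bounds always stay inside [0, 1]. The data are synthetic and the Newton-Raphson fit is a generic stand-in, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic injury data: risk grows with impact severity (illustrative only).
severity = rng.uniform(0.0, 10.0, size=400)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))
injured = rng.binomial(1, expit(severity - 5.0))      # true intercept -5, slope 1

# Fit logistic regression by Newton-Raphson.
X = np.column_stack([np.ones_like(severity), severity])
beta = np.zeros(2)
for _ in range(25):
    p = expit(X @ beta)
    grad = X.T @ (injured - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])          # Fisher information
    beta += np.linalg.solve(hess, grad)
cov = np.linalg.inv(hess)                              # asymptotic covariance

# Pointwise 95% CI on the logit scale, then back-transformed, so the
# injury-risk bounds are guaranteed to lie in [0, 1].
grid = np.column_stack([np.ones(50), np.linspace(0, 10, 50)])
eta = grid @ beta
se = np.sqrt(np.einsum("ij,jk,ik->i", grid, cov, grid))
lower, upper = expit(eta - 1.96 * se), expit(eta + 1.96 * se)
print(f"estimated slope = {beta[1]:.2f}")
```

Building the interval directly on the probability scale can instead produce bounds below 0 or above 1 near the extremes, which is the pitfall the article warns about.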

  14. After p Values: The New Statistics for Undergraduate Neuroscience Education.

    PubMed

    Calin-Jageman, Robert J

    2017-01-01

    Statistical inference is a methodological cornerstone for neuroscience education. For many years this has meant inculcating neuroscience majors into null hypothesis significance testing with p values. There is increasing concern, however, about the pervasive misuse of p values. It is time to start planning statistics curricula for neuroscience majors that replace or de-emphasize p values. One promising alternative approach is what Cumming has dubbed the "New Statistics", an approach that emphasizes effect sizes, confidence intervals, meta-analysis, and open science. I give an example of the New Statistics in action and describe some of the key benefits of adopting this approach in neuroscience education.
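    In the spirit of the "New Statistics", the sketch below reports an effect size (Cohen's d) and a 95% confidence interval for a mean difference instead of only a p value. The two groups are simulated for illustration and do not come from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two hypothetical groups of scores (e.g., control vs. treatment).
a = rng.normal(50, 10, size=40)
b = rng.normal(56, 10, size=40)

diff = b.mean() - a.mean()
# Pooled standard deviation for Cohen's d.
sp = np.sqrt((a.var(ddof=1) * (len(a) - 1) + b.var(ddof=1) * (len(b) - 1))
             / (len(a) + len(b) - 2))
cohens_d = diff / sp

se = sp * np.sqrt(1 / len(a) + 1 / len(b))            # SE of the difference
t_crit = stats.t.ppf(0.975, df=len(a) + len(b) - 2)
ci = (diff - t_crit * se, diff + t_crit * se)
print(f"diff = {diff:.1f}, d = {cohens_d:.2f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```

Reporting the interval conveys both the magnitude and the precision of the effect, which a bare "p < 0.05" cannot.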

  15. Effect of tulle on the mechanical properties of a maxillofacial silicone elastomer.

    PubMed

    Gunay, Yumushan; Kurtoglu, Cem; Atay, Arzu; Karayazgan, Banu; Gurbuz, Cihan Cem

    2008-11-01

    The purpose of this research was to investigate whether physical properties could be improved by incorporating a tulle reinforcement material into a maxillofacial silicone elastomer. A-2186 silicone elastomer was used in this study. The study group consisted of 20 elastomer specimens incorporating tulle, fabricated in dumbbell-shaped silicone patterns using ASTM D412 and D624 standards. The control group consisted of 20 elastomer specimens fabricated without tulle. Tensile strength, ultimate elongation, and tear strength of all specimens were measured and analyzed. Statistical analyses were performed using the Mann-Whitney U test, with statistical significance set at the 95% confidence level. It was found that the tensile and tear strengths of the tulle-incorporated maxillofacial silicone elastomer were higher than those without tulle incorporation (p < 0.05). Therefore, the findings of this study suggest that tulle successfully reinforced a maxillofacial silicone elastomer by providing it with better mechanical properties and augmented strength, especially for the delicate edges of maxillofacial prostheses.
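    The comparison reported here uses the Mann-Whitney U test at the 95% confidence level; a minimal sketch of that analysis, with simulated tear-strength values rather than the study's measurements, looks like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical tear-strength values (MPa) for 20 specimens per group;
# illustrative numbers only, not the study's data.
with_tulle = rng.normal(22.0, 2.0, size=20)
without_tulle = rng.normal(19.0, 2.0, size=20)

# Rank-based two-sample comparison (no normality assumption).
u_stat, p_value = stats.mannwhitneyu(with_tulle, without_tulle,
                                     alternative="two-sided")
significant = p_value < 0.05          # decision at the 95% confidence level
print(f"U = {u_stat:.0f}, p = {p_value:.4f}, significant = {significant}")
```

The rank-based test is a reasonable choice for small specimen groups whose strength distributions may not be normal.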

  16. Reliability of dietary information from surrogate respondents.

    PubMed

    Hislop, T G; Coldman, A J; Zheng, Y Y; Ng, V T; Labo, T

    1992-01-01

    A self-administered food frequency questionnaire was included as part of a case-control study of breast cancer in 1980-82. In 1986-87, a second food frequency questionnaire was sent to surviving cases and husbands of deceased cases; 30 spouses (86% response rate) and 263 surviving cases (88% response rate) returned questionnaires. The dietary questions concerned consumption of specific food items by the case before diagnosis of breast cancer. Missing values were less common in the second questionnaire; there was no significant difference in missing values between surviving cases and spouses of deceased cases. Kappa statistics comparing responses in the first and second questionnaires were significantly lower for spouses of deceased cases than for surviving cases. Reported level of confidence by the husbands regarding knowledge about their wives' eating habits did not influence the kappa statistics or the frequencies of missing values. The lack of good agreement has important implications for the use of proxy interviews from husbands in retrospective dietary studies.
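    The agreement measure used in this study, the kappa statistic, corrects observed agreement for the agreement expected by chance. A small self-contained sketch, with invented food-frequency responses standing in for the questionnaire data, shows the computation:

```python
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Chance-corrected agreement between two sets of categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)
    # Expected agreement under independent marginal distributions.
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical consumption-frequency responses for ten food items on the
# first and second questionnaires (illustrative only).
q1 = ["low", "med", "med", "high", "low", "med", "high", "low", "med", "high"]
q2 = ["low", "med", "high", "high", "low", "low", "high", "low", "med", "med"]
kappa = cohens_kappa(q1, q2, ["low", "med", "high"])
print(round(kappa, 3))
```

Here 7 of 10 responses agree (p_obs = 0.7) but 0.33 agreement is expected by chance, giving kappa ≈ 0.55, i.e., moderate agreement.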

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marleau, Peter; Reyna, David

    In this work we investigate a method that confirms the operability of neutron detectors requiring neither radiological sources nor radiation-generating devices. This is desirable when radiological sources are not available, but confidence in the functionality of the instrument is required. The “source”, based on the production of neutrons in high-Z materials by muons, provides a tagged, low-background and consistent rate of neutrons that can be used to check the functionality of or calibrate a detector. Using a Monte Carlo guided optimization, an experimental apparatus was designed and built to evaluate the feasibility of this technique. Through a series of trial measurements in a variety of locations we show that gated muon-induced neutrons appear to provide a consistent source of neutrons (35.9 ± 2.3 measured neutrons/10,000 muons in the instrument) under normal environmental variability (less than one statistical standard deviation for 10,000 muons), with a combined environmental + statistical uncertainty of ~18% for 10,000 muons. This is achieved in a single 21-22 minute measurement at sea level.

  18. Damage detection of engine bladed-disks using multivariate statistical analysis

    NASA Astrophysics Data System (ADS)

    Fang, X.; Tang, J.

    2006-03-01

    The timely detection of damage in aero-engine bladed-disks is an extremely important and challenging research topic. Bladed-disks have high modal density and, in particular, their vibration responses are subject to significant uncertainties due to manufacturing tolerance (blade-to-blade difference or mistuning), operating condition change, and sensor noise. In this study, we present a new methodology for the on-line damage detection of engine bladed-disks using their vibratory responses during spin-up or spin-down operations, which can be measured by the blade-tip-timing sensing technique. We apply a principal component analysis (PCA)-based approach for data compression, feature extraction, and denoising. The non-model-based damage detection is achieved by analyzing the change between response features of the healthy structure and of the damaged one. We facilitate such comparison by incorporating Hotelling's T² statistic, which yields damage declaration with a given confidence level. The effectiveness of the method is demonstrated by case studies.
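    A minimal sketch of the PCA-plus-Hotelling's-T² scheme the abstract describes, using synthetic baseline data rather than engine measurements; the number of retained components and the confidence level are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "healthy" response features (200 conditions x 10 features);
# illustrative only, not blade-tip-timing data.
healthy = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))

n, m = healthy.shape
mu = healthy.mean(axis=0)
Xc = healthy - mu

# PCA via SVD; retain k principal components for compression/denoising.
k = 3
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                               # loadings (m x k)
lam = s[:k] ** 2 / (n - 1)                 # retained component variances

def t_squared(x):
    """Hotelling's T^2 of one sample in the retained PCA subspace."""
    scores = (x - mu) @ P
    return np.sum(scores ** 2 / lam)

# Control limit for damage declaration at the 99% confidence level.
f_crit = stats.f.ppf(0.99, k, n - k)
t2_limit = k * (n - 1) * (n + 1) / (n * (n - k)) * f_crit

damaged = mu + 5 * np.sqrt(lam[0]) * P[:, 0]   # synthetic "damaged" response
t2_all = np.array([t_squared(x) for x in healthy])
print(t_squared(damaged) > t2_limit, np.median(t2_all) < t2_limit)
```

Samples whose T² exceeds the limit are declared damaged with the stated confidence; healthy baseline samples fall well below it.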

  19. Remote Minehunting System (RMS)

    DTIC Science & Technology

    2015-12-01

    Confidence Level of cost estimate for current APB: 50%. The Independent Cost Estimate to support the RMS Nunn... which the Department has been successful. It is difficult to calculate mathematically the precise confidence levels associated with life-cycle cost...

  20. Management of orthodontic emergencies in primary care - self-reported confidence of general dental practitioners.

    PubMed

    Popat, H; Thomas, K; Farnell, D J J

    2016-07-08

    Objective To determine general dental practitioners' (GDPs) confidence in managing orthodontic emergencies. Design Cross-sectional study. Setting Primary dental care. Subjects and methods An online survey was distributed to dentists practising in Wales. The survey collected basic demographic information and included descriptions of ten common orthodontic emergency scenarios. Main outcome measure Respondents' self-reported confidence in managing the orthodontic emergency scenarios on a 5-point Likert scale. Differences between the Likert responses and the demographic variables were investigated using chi-squared tests. Results The median number of orthodontic emergencies encountered by respondents over the previous six months was 1. Overall, the self-reported confidence of respondents was high, with 7 of the 10 scenarios presented scoring a median of 4, indicating that GDPs were 'confident' in their management. Statistical analysis revealed that GDPs who saw more orthodontic emergencies in the previous six months were more confident when managing the presented scenarios. Other variables such as age, gender, geographic location of practice and number of years practising dentistry were not associated with self-reported confidence. Conclusions Despite GDPs encountering very few orthodontic emergencies in primary care, they appear to be confident in dealing with commonly arising orthodontic emergency situations.

Top