Sample records for standard statistical interpretation

  1. Thinking with Data

    ERIC Educational Resources Information Center

    Smith, Amy; Molinaro, Marco; Lee, Alisa; Guzman-Alvarez, Alberto

    2014-01-01

    For students to be successful in STEM, they need "statistical literacy," the ability to interpret, evaluate, and communicate statistical information (Gal 2002). The science and engineering practices dimension of the "Next Generation Science Standards" ("NGSS") highlights these skills, emphasizing the importance of…

  2. What Should We Grow in Our School Garden to Sell at the Farmers' Market? Initiating Statistical Literacy through Science and Mathematics Integration

    ERIC Educational Resources Information Center

    Selmer, Sarah J.; Rye, James A.; Malone, Elizabeth; Fernandez, Danielle; Trebino, Kathryn

    2014-01-01

    Statistical literacy is essential to scientific literacy, and the quest for such is best initiated in the elementary grades. The "Next Generation Science Standards and the Common Core State Standards for Mathematics" set forth practices (e.g., asking questions, using tools strategically to analyze and interpret data) and content (e.g.,…

  3. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
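
    As a small illustration of the idea in this abstract, the sketch below uses invented test scores rather than the authors' data: a percentile rank and a Z-score are computed from an observed norm-group distribution, and a nonparametric (multinomial) bootstrap attaches standard errors to them. This is illustrative Python, not the authors' check.norms function from the R mokken package.

```python
# Hypothetical illustration: norm statistics (percentile rank, Z-score) with bootstrap
# standard errors. NOT the authors' check.norms (R, mokken package); it only sketches
# how sampling error in the norm group propagates into the norm statistics.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.binomial(n=40, p=0.6, size=500)       # assumed norm-group test scores

def percentile_rank(sample, x):
    # mid-rank convention: percent below x plus half the percent exactly at x
    return 100 * (np.mean(sample < x) + 0.5 * np.mean(sample == x))

def z_score(sample, x):
    return (x - sample.mean()) / sample.std(ddof=1)

x0 = 28                                            # an individual test taker's score
B = 2000
pr_boot = np.empty(B)
z_boot = np.empty(B)
for b in range(B):
    resample = rng.choice(scores, size=scores.size, replace=True)   # multinomial resampling
    pr_boot[b] = percentile_rank(resample, x0)
    z_boot[b] = z_score(resample, x0)

print(f"percentile rank = {percentile_rank(scores, x0):.1f} (bootstrap SE = {pr_boot.std(ddof=1):.2f})")
print(f"Z-score = {z_score(scores, x0):.2f} (bootstrap SE = {z_boot.std(ddof=1):.3f})")
```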

  4. Multimedia Presentations in Educational Measurement and Statistics: Design Considerations and Instructional Approaches

    ERIC Educational Resources Information Center

    Sklar, Jeffrey C.; Zwick, Rebecca

    2009-01-01

    Proper interpretation of standardized test scores is a crucial skill for K-12 teachers and school personnel; however, many do not have sufficient knowledge of measurement concepts to appropriately interpret and communicate test results. In a recent four-year project funded by the National Science Foundation, three web-based instructional…

  5. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    PubMed

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the greater the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of P values is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around the mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. The confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being measured. A wide confidence interval indicates that if the experiment were repeated multiple times on other samples, the measured statistic would lie within a wide range of possibilities. The confidence interval relies on the SE. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
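
    The agreement and significance points in this abstract are easy to make concrete. The sketch below, with invented reader ratings, contrasts simple percentage agreement with Cohen's κ computed from the standard formula κ = (po − pe)/(1 − pe); it is not code from the article.

```python
# Minimal sketch (invented data): percentage agreement vs. Cohen's kappa for two readers.
import numpy as np

reader1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])   # 1 = abnormal, 0 = normal
reader2 = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1])

p_o = np.mean(reader1 == reader2)                # observed agreement

# chance agreement expected from each reader's marginal rating rates
p1, p2 = reader1.mean(), reader2.mean()
p_e = p1 * p2 + (1 - p1) * (1 - p2)

kappa = (p_o - p_e) / (1 - p_e)                  # agreement corrected for chance
print(f"percent agreement = {100 * p_o:.1f}%, Cohen's kappa = {kappa:.2f}")
```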

  6. An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners

    PubMed Central

    O'Neill, Thomas A.

    2017-01-01

    Applications of interrater agreement (IRA) statistics for Likert scales are plentiful in research and practice. IRA may be implicated in job analysis, performance appraisal, panel interviews, and any other approach to gathering systematic observations. Any rating system involving subject-matter experts can also benefit from IRA as a measure of consensus. Further, IRA is fundamental to aggregation in multilevel research, which is becoming increasingly common in order to address nesting. Although several technical descriptions of a few specific IRA statistics exist, this paper aims to provide a tractable orientation to common IRA indices to support application. The introductory overview is written with the intent of facilitating contrasts among IRA statistics by critically reviewing equations, interpretations, strengths, and weaknesses. Statistics considered include rwg, rwg*, r′wg, rwg(p), average deviation (AD), awg, standard deviation (Swg), and the coefficient of variation (CVwg). Equations support quick calculation and contrasting of different agreement indices. The article also includes a “quick reference” table and three figures in order to help readers identify how IRA statistics differ and how interpretations of IRA will depend strongly on the statistic employed. A brief consideration of recommended practices involving statistical and practical cutoff standards is presented, and conclusions are offered in light of the current literature. PMID:28553257
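
    As a companion to the abstract above, here is a small illustration (invented Likert ratings) of two of the listed indices: rwg against a uniform null distribution and the average deviation AD. The formulas are the standard ones; the code is not from the article.

```python
# Minimal sketch (invented ratings): two interrater-agreement indices for one Likert item.
import numpy as np

ratings = np.array([4, 5, 4, 4, 3, 5, 4, 4])     # hypothetical ratings on a 5-point scale
A = 5                                            # number of response options

s2 = ratings.var(ddof=1)                         # observed variance of the ratings
sigma_eu2 = (A**2 - 1) / 12                      # variance of a uniform (random-responding) null

r_wg = 1 - s2 / sigma_eu2                        # 1 = perfect agreement; 0 = no better than chance
ad_mean = np.mean(np.abs(ratings - ratings.mean()))   # average deviation about the mean

print(f"r_wg = {r_wg:.2f}, AD(mean) = {ad_mean:.2f}")
```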

  7. On the statistical significance of excess events: Remarks of caution and the need for a standard method of calculation

    NASA Technical Reports Server (NTRS)

    Staubert, R.

    1985-01-01

    Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
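
    The summary above does not reproduce the formula in question, so the sketch below is an assumption-laden illustration rather than the author's recommended method: with made-up counts, it contrasts a simple, conservative significance estimate for an excess of events with an exact Poisson tail probability.

```python
# Hypothetical counting-experiment numbers; NOT taken from the cited report.
# Compares a simple "conservative" significance estimate with an exact Poisson tail.
from scipy.stats import norm, poisson

n_on = 140        # counts observed in the on-source window
b = 100.0         # expected background counts (assumed known here for simplicity)

s_simple = (n_on - b) / (n_on + b) ** 0.5        # simple estimate: excess / sqrt(total counts)
p_exact = poisson.sf(n_on - 1, b)                # P(X >= n_on) if only background were present
s_exact = norm.isf(p_exact)                      # the same tail probability expressed in sigmas

print(f"simple estimate: {s_simple:.2f} sigma")
print(f"exact Poisson:   p = {p_exact:.2e} (~{s_exact:.2f} sigma)")
```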

  8. 40 CFR Appendix N to Part 50 - Interpretation of the National Ambient Air Quality Standards for PM2.5

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... midnight to midnight (local standard time) that are used in NAAQS computations. Designated monitors are... accordance with part 58 of this chapter. Design values are the metrics (i.e., statistics) that are compared... (referred to as the “annual standard design value”). If spatial averaging has been approved by EPA for a...

  9. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    USGS Publications Warehouse

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
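
    A short illustration of the kind of computation described above, using hypothetical field-blank results rather than USGS data: a t-based confidence interval for the mean (potential bias) and a chi-square-based interval for the standard deviation (variability).

```python
# Minimal sketch with invented field-blank concentrations (mg/L).
import numpy as np
from scipy import stats

blanks = np.array([0.02, 0.00, 0.05, 0.01, 0.03, 0.00, 0.04, 0.02, 0.01, 0.03])
n = blanks.size
mean, sd = blanks.mean(), blanks.std(ddof=1)

# t-based 95% confidence interval for the mean
t = stats.t.ppf(0.975, df=n - 1)
ci_mean = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))

# chi-square-based 95% confidence interval for the standard deviation
ci_sd = (np.sqrt((n - 1) * sd**2 / stats.chi2.ppf(0.975, df=n - 1)),
         np.sqrt((n - 1) * sd**2 / stats.chi2.ppf(0.025, df=n - 1)))

print(f"mean = {mean:.3f}, 95% CI ({ci_mean[0]:.3f}, {ci_mean[1]:.3f})")
print(f"SD   = {sd:.3f}, 95% CI ({ci_sd[0]:.3f}, {ci_sd[1]:.3f})")
```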

  10. Standardized Effect Sizes for Moderated Conditional Fixed Effects with Continuous Moderator Variables

    PubMed Central

    Bodner, Todd E.

    2017-01-01

    Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404

  11. Measuring Equity: Creating a New Standard for Inputs and Outputs

    ERIC Educational Resources Information Center

    Knoeppel, Robert C.; Della Sala, Matthew R.

    2013-01-01

    The purpose of this article is to introduce a new statistic to capture the ratio of equitable student outcomes given equitable inputs. Given the fact that finance structures should be aligned to outcome standards according to judicial interpretation, a ratio of outputs to inputs, or "equity ratio," is introduced to discern if conclusions can be…

  12. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., other techniques, such as the use of statistical models or the use of historical data could be..., mathematical techniques should be applied to account for the trends to ensure that the expected annual values... emission patterns, either the most recent representative year(s) could be used or statistical techniques or...

  13. DNA Damage and Genetic Instability as Harbingers of Prostate Cancer

    DTIC Science & Technology

    2013-01-01

    incidence of prostate cancer as compared to placebo. Primary analysis of this trial indicated no statistically significant effect of selenium...Identification, isolation, staining, processing, and statistical analysis of slides for ERG and PTEN markers (aim 1) and interpretation of these results...participating in this study being conducted under Investigational New Drug #29829 from the Food and Drug Administration. STANDARD TREATMENT Patients

  14. Statistical geometric affinity in human brain electric activity

    NASA Astrophysics Data System (ADS)

    Chornet-Lurbe, A.; Oteo, J. A.; Ros, J.

    2007-05-01

    The representation of the human electroencephalogram (EEG) records by neurophysiologists demands standardized time-amplitude scales for their correct conventional interpretation. In a suite of graphical experiments involving scaling affine transformations we have been able to convert electroencephalogram samples corresponding to any particular sleep phase and relaxed wakefulness into each other. We propound a statistical explanation for that finding in terms of data collapse. As a sequel, we determine characteristic time and amplitude scales and outline a possible physical interpretation. An analysis for characteristic times based on lacunarity is also carried out as well as a study of the synchrony between left and right EEG channels.

  15. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  16. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  17. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  18. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  19. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  20. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study tests an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both key uniformity in the context of universal composability and the operational meaning of the failure probability of key extraction. However, the proposal went unverified for many years, and H. P. Yuen and O. Hirota have cast doubt on this interpretation since 2009. To examine this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory once a quantum measurement has been made, and compared it with the failure probability to determine whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. We also explain why the trace distance is not suitable for guaranteeing security in QKD from the viewpoint of quantum binary decision theory.
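
    For readers unfamiliar with the quantity being debated, the sketch below computes the statistical (total-variation) distance between an empirical distribution of pseudo-random 8-bit values and the ideal uniform distribution; the generator, sample size, and alphabet are illustrative assumptions, not the authors' experimental setup.

```python
# Illustration only: total-variation (statistical) distance of an empirical distribution
# from the ideal uniform distribution; it stays well above zero for finite samples.
import numpy as np

rng = np.random.default_rng(42)              # stand-in for a physical random number generator
samples = rng.integers(0, 256, size=100_000) # 8-bit values

counts = np.bincount(samples, minlength=256)
p_emp = counts / counts.sum()                # empirical distribution of generated values
p_ideal = np.full(256, 1 / 256)              # ideal uniform distribution

tv_distance = 0.5 * np.abs(p_emp - p_ideal).sum()
print(f"statistical (total-variation) distance to uniform: {tv_distance:.4f}")
```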

  1. A comparison of five serological tests for bovine brucellosis.

    PubMed Central

    Dohoo, I R; Wright, P F; Ruckerbauer, G M; Samagh, B S; Robertson, F J; Forbes, L B

    1986-01-01

    Five serological assays: the buffered plate antigen test, the standard tube agglutination test, the complement fixation test, the hemolysis-in-gel test and the indirect enzyme immunoassay were diagnostically evaluated. Test data consisted of results from 1208 cattle in brucellosis-free herds, 1578 cattle in reactor herds of unknown infection status and 174 cattle from which Brucella abortus had been cultured. The complement fixation test had the highest specificity in both nonvaccinated and vaccinated cattle. The indirect enzyme immunoassay, if interpreted at a high threshold, also exhibited a high specificity in both groups of cattle. The hemolysis-in-gel test had a very high specificity when used in nonvaccinated cattle but quite a low specificity among vaccinates. With the exception of the complement fixation test, all tests had high sensitivities if interpreted at the minimum threshold. However, the sensitivities of the standard tube agglutination test and indirect enzyme immunoassay, when interpreted at high thresholds were comparable to that of the complement fixation test. A kappa statistic was used to measure the agreement between the various tests. In general the kappa statistics were quite low, suggesting that the various tests may detect different antibody isotypes. There was however, good agreement between the buffered plate antigen test and standard tube agglutination test (the two agglutination tests evaluated) and between the complement fixation test and the indirect enzyme immunoassay when interpreted at a high threshold. With the exception of the buffered plate antigen test, all tests were evaluated as confirmatory tests by estimating their specificity and sensitivity on screening-test positive samples.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:3539295

  2. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
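
    A minimal sketch of the standard heterogeneity statistics named above (Cochran's Q and I²), computed for invented trial effect estimates; it does not reproduce the generalised Q methods or the software provided by the authors.

```python
# Minimal sketch with invented trial results: fixed-effect pooling, Cochran's Q, and I^2.
import numpy as np
from scipy.stats import chi2

y = np.array([-0.30, -0.10, -0.45, 0.05, -0.25])   # hypothetical log hazard ratios
se = np.array([0.12, 0.15, 0.20, 0.10, 0.18])      # their standard errors

w = 1 / se**2                                      # inverse-variance weights
pooled = np.sum(w * y) / np.sum(w)                 # fixed-effect pooled estimate

Q = np.sum(w * (y - pooled) ** 2)                  # Cochran's Q statistic
df = y.size - 1
p_het = chi2.sf(Q, df)                             # heterogeneity test p-value
I2 = max(0.0, (Q - df) / Q) * 100                  # I^2: % of variation beyond chance

print(f"pooled effect = {pooled:.3f}, Q = {Q:.2f} (df = {df}, p = {p_het:.3f}), I^2 = {I2:.0f}%")
```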

  3. The challenges for scientific publishing, 60 years on.

    PubMed

    Hausmann, Laura; Murphy, Sean P

    2016-10-01

    The most obvious difference in science publishing between 'then' and 'now' is the dramatic change in the communication of data and in their interpretation. The democratization of science via the Internet has brought not only benefits but also challenges to publishing including fraudulent behavior and plagiarism, data and statistics reporting standards, authorship confirmation and other issues which affect authors, readers, and publishers in different ways. The wide accessibility of data on a global scale permits acquisition and meta-analysis to mine for novel synergies, and has created a highly commercialized environment. As we illustrate here, identifying unacceptable practices leads to changes in the standards for data reporting. In the past decades, science publishing underwent dramatic changes in the communication of data and in their interpretation, in the increasing pressure and commercialization, and the democratization of science on a global scale via the Internet. This article reviews the benefits and challenges to publishing including fraudulent behavior and plagiarism, data and statistics reporting standards, authorship confirmation and other issues, with the aim to provide readers with practical examples and hands-on guidelines. As we illustrate here, identifying unacceptable practices leads to changes in the standards for data reporting. This article is part of the 60th Anniversary special issue. © 2016 International Society for Neurochemistry.

  4. A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons

    ERIC Educational Resources Information Center

    Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.

    2016-01-01

    Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…

  5. Analysis of statistical misconception in terms of statistical reasoning

    NASA Astrophysics Data System (ADS)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the era of globalization, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. This skill can be developed through various levels of education. However, the skill remains low because many people, students included, assume that statistics is just counting and applying formulas, and students still have negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by examining the effect of students' misconceptions on statistical reasoning skill. The sample was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If a minimum value of 65 is taken as the standard for achieving course competence, the students' mean values fall below that standard. The results of the misconception study indicate which subtopics should receive attention. Based on the assessment results, students' misconceptions occurred in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  6. Standardization of infrared breast thermogram acquisition protocols and abnormality analysis of breast thermograms

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Gogoi, Usha Rani; Das, Kakali; Ghosh, Anjan Kumar; Bhattacharjee, Debotosh; Majumdar, Gautam

    2016-05-01

    The non-invasive, painless, radiation-free and cost-effective infrared breast thermography (IBT) makes a significant contribution to improving the survival rate of breast cancer patients by early detecting the disease. This paper presents a set of standard breast thermogram acquisition protocols to improve the potentiality and accuracy of infrared breast thermograms in early breast cancer detection. By maintaining all these protocols, an infrared breast thermogram acquisition setup has been established at the Regional Cancer Centre (RCC) of Government Medical College (AGMC), Tripura, India. The acquisition of breast thermogram is followed by the breast thermogram interpretation, for identifying the presence of any abnormality. However, due to the presence of complex vascular patterns, accurate interpretation of breast thermogram is a very challenging task. The bilateral symmetry of the thermal patterns in each breast thermogram is quantitatively computed by statistical feature analysis. A series of statistical features are extracted from a set of 20 thermograms of both healthy and unhealthy subjects. Finally, the extracted features are analyzed for breast abnormality detection. The key contributions made by this paper can be highlighted as -- a) the designing of a standard protocol suite for accurate acquisition of breast thermograms, b) creation of a new breast thermogram dataset by maintaining the protocol suite, and c) statistical analysis of the thermograms for abnormality detection. By doing so, this proposed work can minimize the rate of false findings in breast thermograms and thus, it will increase the utilization potentiality of breast thermograms in early breast cancer detection.

  7. Jordanian twelfth-grade science teachers' self-reported usage of science and engineering practices in the next generation science standards

    NASA Astrophysics Data System (ADS)

    Malkawi, Amal Reda; Rababah, Ebtesam Qassim

    2018-06-01

    This study investigated the degree that Science and Engineering Practices (SEPs) criteria from the Next Generation Science Standards (NGSS) were included in self-reported teaching practices of twelfth-grade science teachers in Jordan. This study sampled (n = 315) science teachers recruited from eight different public school directorates. The sample was surveyed using an instrument adapted from Kawasaki (2015). Results found that Jordanian science teachers incorporate (SEPs) in their classroom teaching at only a moderate level. SEPs applied most frequently included 'using the diagram, table or graphic through instructions to clarify the subject of a new science,' and to 'discuss with the students how to interpret the quantitative data from the experiment or investigation'. The practice with the lowest frequency was 'teach a lesson on interpreting statistics or quantitative data,' which was moderately applied. No statistically significant differences at (α = 0.05) were found among these Jordanian science teachers' self-estimations of (SEP) application into their own teaching according to the study's demographic variables (specialisation, educational qualification, teaching experience). However, a statistically significant difference at (α = 0.05) was found among Jordanian high school science teachers' practice means based on gender, with female teachers using SEPs at a higher rate than male teachers.

  8. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  9. Applying Beliefs and Resources Frameworks to the Psychometric Analyses of an Epistemology Survey

    ERIC Educational Resources Information Center

    Yerdelen-Damar, Sevda; Elby, Andrew; Eryilmaz, Ali

    2012-01-01

    This study explored how researchers' views about the form of students' epistemologies influence how the researchers develop and refine surveys and how they interpret survey results. After running standard statistical analyses on 505 physics students' responses to the Turkish version of the Maryland Physics Expectations-II survey, probing students'…

  10. Paleomagnetism.org: An online multi-platform open source environment for paleomagnetic data analysis

    NASA Astrophysics Data System (ADS)

    Koymans, Mathijs R.; Langereis, Cor G.; Pastor-Galán, Daniel; van Hinsbergen, Douwe J. J.

    2016-08-01

    This contribution provides an overview of Paleomagnetism.org, an open-source, multi-platform online environment for paleomagnetic data analysis. Paleomagnetism.org provides an interactive environment where paleomagnetic data can be interpreted, evaluated, visualized, and exported. The Paleomagnetism.org application is split into an interpretation portal, a statistics portal, and a portal for miscellaneous paleomagnetic tools. In the interpretation portal, principal component analysis can be performed on visualized demagnetization diagrams. Interpreted directions and great circles can be combined to find great circle solutions. These directions can be used in the statistics portal, or exported as data and figures. The tools in the statistics portal cover standard Fisher statistics for directions and VGPs, along with other statistical parameters used as reliability criteria. Other available tools include an eigenvector approach foldtest, two reversal tests, including a Monte Carlo simulation on mean directions, and a coordinate bootstrap on the original data. An implementation is included for the detection and correction of inclination shallowing in sediments following TK03.GAD. Finally, we provide a module to visualize VGPs and expected paleolatitudes, declinations, and inclinations relative to widely used global apparent polar wander path models in coordinates of major continent-bearing plates. The tools in the miscellaneous portal include a net tectonic rotation (NTR) analysis to restore a body to its paleo-vertical and a bootstrapped oroclinal test using linear regression techniques, including a modified foldtest around a vertical axis. Paleomagnetism.org provides an integrated approach for researchers to work with visualized (e.g. hemisphere projections, Zijderveld diagrams) paleomagnetic data. The application constructs a custom exportable file that can be shared freely and included in public databases. This exported file contains all data and can later be imported to the application by other researchers. The accessibility and simplicity through which paleomagnetic data can be interpreted, analyzed, visualized, and shared make Paleomagnetism.org of interest to the community.
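
    As an orientation to the statistics portal's terminology, the sketch below computes standard Fisher (1953) statistics (mean direction, precision parameter k, and α95) for a handful of made-up directions; it is not code from Paleomagnetism.org.

```python
# Sketch with invented declination/inclination pairs (degrees): Fisher mean direction,
# precision parameter k, and the 95% confidence cone alpha_95.
import numpy as np

dec = np.radians(np.array([352.0, 5.0, 358.0, 10.0, 2.0, 355.0]))
inc = np.radians(np.array([48.0, 52.0, 45.0, 50.0, 55.0, 47.0]))

# direction cosines of each measured direction
xyz = np.column_stack([np.cos(inc) * np.cos(dec), np.cos(inc) * np.sin(dec), np.sin(inc)])
N = len(xyz)
resultant = xyz.sum(axis=0)
R = np.linalg.norm(resultant)                      # length of the resultant vector

mean_dec = np.degrees(np.arctan2(resultant[1], resultant[0])) % 360
mean_inc = np.degrees(np.arcsin(resultant[2] / R))

k = (N - 1) / (N - R)                              # Fisher precision parameter
a95 = np.degrees(np.arccos(1 - (N - R) / R * ((1 / 0.05) ** (1 / (N - 1)) - 1)))

print(f"mean direction: D = {mean_dec:.1f}, I = {mean_inc:.1f}, k = {k:.0f}, a95 = {a95:.1f} deg")
```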

  11. Assessing compositional variability through graphical analysis and Bayesian statistical approaches: case studies on transgenic crops.

    PubMed

    Harrigan, George G; Harrison, Jay M

    2012-01-01

    New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.

  12. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
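
    A small illustration of the standard normal facts discussed in this series; the mean of 100 and SD of 15 used for the worked Z-score are assumptions, and the code is not from the article.

```python
# Minimal sketch: coverage of the normal distribution and a worked Z-score.
from scipy.stats import norm

# probability mass within 1, 2, and 3 standard deviations of the mean
for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} SD: {100 * coverage:.1f}%")

# converting an observation to the standard normal scale, assuming mu = 100, sd = 15
mu, sd, x = 100.0, 15.0, 127.0
z = (x - mu) / sd
print(f"z = {z:.2f}, P(X > {x:.0f}) = {norm.sf(z):.3f}")
```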

  13. Dilute Russel Viper Venom Time analysis in a Haematology Laboratory: An audit.

    PubMed

    Kruger, W; Meyer, P W A; Nel, J G

    2018-04-17

    To determine whether the current set of evaluation criteria used for dilute Russel Viper Venom Time (dRVVT) investigations in the routine laboratory meet expectation and identify possible shortcomings. All dRVVT assays requested from January 2015 to December 2015 were appraised in this cross-sectional study. The raw data panels were compared with the new reference interval, established in 2016, to determine the sequence of assays that should have been performed. The interpretive comments were audited, and false-negative reports identified. Interpretive comments according to three interpretation guidelines were compared. The reagent cost per assay was determined, and reagent cost wastage, due to redundant tests, was calculated. Only ~9% of dRVVT results authorized during 2015 had an interpretive comment included in the report. ~15% of these results were false-negative interpretations. There is a significant statistical difference in interpretive comments between the three interpretation methods. Redundant mixing tests resulted in R 7477.91 (~11%) reagent cost wastage in 2015. We managed to demonstrate very evident deficiencies in our own practice and managed to establish a standardized workflow that will potentially render our service more efficient and cost effective, aiding clinicians in making improved treatment decisions and diagnoses. Furthermore, it is essential that standard operating procedures be kept up to date and executed by all staff in the laboratory. © 2018 John Wiley & Sons Ltd.

  14. Statistics for Radiology Research.

    PubMed

    Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua

    2017-02-01

    Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts.

  15. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
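
    The closed-form relation described above can be checked numerically. In the sketch below, with an assumed standardized difference d = 0.8, an empirical c-statistic (the proportion of case-control pairs ordered correctly) is compared against Φ(d/√2), the equal-variance binormal prediction (equivalently expressed through the product of the SD and the log-odds ratio).

```python
# Monte Carlo check (invented parameters) of the binormal, equal-variance result:
# the c-statistic equals Phi(d / sqrt(2)), d being the standardized mean difference
# of the explanatory variable between those with and without the condition.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d = 0.8                                  # assumed standardized mean difference
x0 = rng.normal(0.0, 1.0, 2000)          # explanatory variable, subjects without the condition
x1 = rng.normal(d, 1.0, 2000)            # explanatory variable, subjects with the condition

# empirical c-statistic = P(x1 > x0), estimated over all case-control pairs
empirical_c = np.mean(x1[:, None] > x0[None, :])
predicted_c = norm.cdf(d / np.sqrt(2))

print(f"empirical c ~ {empirical_c:.3f}, predicted Phi(d/sqrt(2)) = {predicted_c:.3f}")
```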

  16. Neutral vs positive oral contrast in diagnosing acute appendicitis with contrast-enhanced CT: sensitivity, specificity, reader confidence and interpretation time

    PubMed Central

    Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F

    2011-01-01

    Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1=poor to 4=excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was found to be 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents (sensitivities p-values >0.6; specificities p-values>0.08), in the area under the ROC curve (range 0.95–0.99) or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Average interpretation times showed no statistically significant differences between the agents. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365

  17. CAN'T MISS--conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
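
    A minimal worked example of the descriptive statistics named above, using invented length-of-stay data; it is not code from the article.

```python
# Minimal sketch (hypothetical hospital lengths of stay, in days).
import numpy as np

los = np.array([2, 3, 3, 4, 5, 5, 6, 7, 9, 14], dtype=float)

mean = los.mean()
median = np.median(los)
variance = los.var(ddof=1)     # sample variance (divides by n - 1)
sd = los.std(ddof=1)           # sample standard deviation

print(f"mean = {mean:.1f}, median = {median:.1f}, variance = {variance:.1f}, SD = {sd:.1f}")
```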

  18. The Impact of APA and AERA Guidelines on Effect Size Reporting

    ERIC Educational Resources Information Center

    Peng, Chao-Ying Joanne; Chen, Li-Ting; Chiang, Hsu-Min; Chiang, Yi-Chen

    2013-01-01

    Given the long history of effect size (ES) indices (Olejnik and Algina, "Contemporary Educational Psychology," 25, 241-286 2000) and various attempts by APA and AERA to encourage the reporting and interpretation of ES to supplement findings from inferential statistical analyses, it is essential to document the impact of APA and AERA standards on…

  19. A bibliometric analysis of statistical terms used in American Physical Therapy Association journals (2011-2012): evidence for educating physical therapists.

    PubMed

    Tilson, Julie K; Marshall, Katie; Tam, Jodi J; Fetters, Linda

    2016-04-22

    A primary barrier to the implementation of evidence based practice (EBP) in physical therapy is therapists' limited ability to understand and interpret statistics. Physical therapists demonstrate limited skills and report low self-efficacy for interpreting results of statistical procedures. While standards for physical therapist education include statistics, little empirical evidence is available to inform what should constitute such curricula. The purpose of this study was to conduct a census of the statistical terms and study designs used in physical therapy literature and to use the results to make recommendations for curricular development in physical therapist education. We conducted a bibliometric analysis of 14 peer-reviewed journals associated with the American Physical Therapy Association over 12 months (Oct 2011-Sept 2012). Trained raters recorded every statistical term appearing in identified systematic reviews, primary research reports, and case series and case reports. Investigator-reported study design was also recorded. Terms representing the same statistical test or concept were combined into a single, representative term. Cumulative percentage was used to identify the most common representative statistical terms. Common representative terms were organized into eight categories to inform curricular design. Of 485 articles reviewed, 391 met the inclusion criteria. These 391 articles used 532 different terms which were combined into 321 representative terms; 13.1 (sd = 8.0) terms per article. Eighty-one representative terms constituted 90% of all representative term occurrences. Of the remaining 240 representative terms, 105 (44%) were used in only one article. The most common study design was prospective cohort (32.5%). Physical therapy literature contains a large number of statistical terms and concepts for readers to navigate. However, in the year sampled, 81 representative terms accounted for 90% of all occurrences. These "common representative terms" can be used to inform curricula to promote physical therapists' skills, competency, and confidence in interpreting statistics in their professional literature. We make specific recommendations for curriculum development informed by our findings.

  20. Statistical Data Editing in Scientific Articles.

    PubMed

    Habibzadeh, Farrokh

    2017-07-01

    Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on statistical analyses of the data. Numbers should be reported with appropriate precision. Standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for description of normally and non-normally distributed data, respectively. If possible, it is better to report 95% confidence intervals (CIs) for statistics, at least for main outcome variables. P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors would do well to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
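
    The reporting advice above is easy to illustrate. With an invented, skewed sample, the sketch contrasts the SD (dispersion of the data) with the SEM (precision of the mean) and shows the median (IQR) summary preferred for non-normally distributed data.

```python
# Sketch with invented, skewed measurements: SD vs. SEM, and mean (SD) vs. median (IQR).
import numpy as np

x = np.array([3.1, 3.4, 2.9, 3.8, 3.2, 3.5, 3.0, 9.8])   # one outlier skews the sample

sd = x.std(ddof=1)
sem = sd / np.sqrt(x.size)     # always smaller than the SD; describes precision of the mean, not spread
q1, med, q3 = np.percentile(x, [25, 50, 75])

print(f"mean (SD)    = {x.mean():.2f} ({sd:.2f})   SEM = {sem:.2f}")
print(f"median (IQR) = {med:.2f} ({q1:.2f}-{q3:.2f})")
```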

  1. Using the Halstead-Reitan Battery to diagnose brain damage: a comparison of the predictive power of traditional techniques to Rohling's Interpretive Method.

    PubMed

    Rohling, Martin L; Williamson, David J; Miller, L Stephen; Adams, Russell L

    2003-11-01

    The aim of this project was to validate an alternative global measure of neurocognitive impairment (Rohling Interpretive Method, or RIM) that could be generated from data gathered from a flexible battery approach. A critical step in this process is to establish the utility of the technique against current standards in the field. In this paper, we compared results from the Rohling Interpretive Method to those obtained from the General Neuropsychological Deficit Scale (GNDS; Reitan & Wolfson, 1988) and the Halstead-Russell Average Impairment Rating (AIR; Russell, Neuringer & Goldstein, 1970) on a large previously published sample of patients assessed with the Halstead-Reitan Battery (HRB). Findings support the use of the Rohling Interpretive Method in producing summary statistics similar in diagnostic sensitivity and specificity to the traditional HRB indices.

  2. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  3. Is the statistic value all we should care about in neuroimaging?

    PubMed

    Chen, Gang; Taylor, Paul A; Cox, Robert W

    2017-02-15

    Here we address an important issue that has been embedded within the neuroimaging community for a long time: the absence of effect estimates in results reporting in the literature. The statistic value itself, as a dimensionless measure, does not provide information on the biophysical interpretation of a study, and it certainly does not represent the whole picture of a study. Unfortunately, in contrast to standard practice in most scientific fields, effect (or amplitude) estimates are usually not provided in most results reporting in the current neuroimaging publications and presentations. Possible reasons underlying this general trend include (1) lack of general awareness, (2) software limitations, (3) inaccurate estimation of the BOLD response, and (4) poor modeling due to our relatively limited understanding of FMRI signal components. However, as we discuss here, such reporting damages the reliability and interpretability of the scientific findings themselves, and there is in fact no overwhelming reason for such a practice to persist. In order to promote meaningful interpretation, cross validation, reproducibility, meta and power analyses in neuroimaging, we strongly suggest that, as part of good scientific practice, effect estimates should be reported together with their corresponding statistic values. We provide several easily adaptable recommendations for facilitating this process. Published by Elsevier Inc.

  4. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied. Thus there is more need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard technique to determine the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by checkerboard method have also been reported in other studies using the same drug combinations and same types of bacteria. It was thought that these differences in synergy rates might be due to the different methods of interpretation of synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen especially in intensive-care units. For this reason, multidrug resistant A.baumannii has been the subject of a considerable amount of research about antimicrobial combinations. In the present study, the in vitro activities of frequently preferred combinations in A.baumannii infections like imipenem plus ampicillin/sulbactam, and meropenem plus ampicillin/sulbactam were tested by checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently the activity of two different combinations were tested in the dilution range of 4 x MIC and 0.03 x MIC in 96-well checkerboard plates. The results were obtained separately using the four different interpretation methods frequently preferred by researchers. Thus, it was aimed to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by different interpretation methods. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four different interpretation methods for the determination of synergistic and indifferent interactions (p< 0.0001). Highest rates of synergy were observed with both combinations by the method that used the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p> 0.05). In conclusion although there is a standard procedure for checkerboard synergy testing it fails to exhibit standard results owing to different methods of interpretation of the results. Thus, there is a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation further studies investigating the clinical benefits of synergic combinations and additionally comparing the consistency of the results obtained from the other standard combination tests like time-kill studies, are required.
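
    For orientation, checkerboard results are usually reduced to a fractional inhibitory concentration index (FICI). The sketch below uses hypothetical MIC values and one widely used convention (synergy at FICI ≤ 0.5, antagonism at FICI > 4); as the abstract stresses, interpretation schemes differ, so these cutoffs are an assumption, not the paper's method.

```python
# Hypothetical MIC values (mg/L); thresholds follow one common convention, not the cited study.
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for a two-drug checkerboard well."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

fici = fic_index(mic_a_alone=32, mic_b_alone=64, mic_a_combo=4, mic_b_combo=8)

if fici <= 0.5:
    verdict = "synergy"
elif fici > 4:
    verdict = "antagonism"
else:
    verdict = "indifference"

print(f"FICI = {fici:.2f} -> {verdict}")
```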

  5. The challenging use and interpretation of circulating biomarkers of exposure to persistent organic pollutants in environmental health: Comparison of lipid adjustment approaches in a case study related to endometriosis.

    PubMed

    Cano-Sancho, German; Labrune, Léa; Ploteau, Stéphane; Marchand, Philippe; Le Bizec, Bruno; Antignac, Jean-Philippe

    2018-06-01

    The gold-standard matrix for measuring the internal levels of persistent organic pollutants (POPs) is adipose tissue; however, in epidemiological studies the use of serum is preferred because of its lower cost and higher accessibility. The interpretation of serum biomarkers is tightly related to the understanding of the underlying causal structure relating the POPs, serum lipids and the disease. Considering the extended benefits of using serum biomarkers, we aimed to examine further whether, through statistical modelling, we could improve the use and interpretation of serum biomarkers in the study of endometriosis. Hence, we conducted a systematic comparison of statistical approaches commonly used to lipid-adjust circulating biomarkers of POPs, based on existing methods, using data from a pilot case-control study focused on severe deep infiltrating endometriosis. The odds ratios (ORs) obtained from unconditional regression for the models with serum biomarkers were further compared to those obtained from adipose tissue. The results of this exploratory study did not support the use of blood biomarkers as proxy estimates of POPs in adipose tissue in risk models for endometriosis with the available statistical approaches for lipid correction. The statistical approaches commonly used to lipid-adjust circulating POPs do not fully represent the underlying biological complexity between POPs, lipids and disease (especially for compounds directly or indirectly affecting, or affected by, lipid metabolism). Hence, further investigations are warranted to improve the use and interpretation of blood biomarkers under complex scenarios of lipid dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.
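
    For readers unfamiliar with the lipid-adjustment approaches being compared, the two most common model forms (lipid standardization versus lipid adjustment by covariate) can be sketched as below; the data are simulated and all column names are hypothetical, so this only illustrates the modelling idea, not the study's analysis.

        import numpy as np, pandas as pd, statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 120
        lipids = rng.normal(6.0, 1.0, n).clip(3.0)                # total serum lipids (g/L)
        pop_serum = np.exp(rng.normal(0.0, 0.5, n)) * lipids / 6  # wet-weight serum POP (ng/mL)
        case = rng.binomial(1, 0.3, n)                            # case-control status
        df = pd.DataFrame({"pop_serum": pop_serum, "lipids": lipids, "case": case})

        # Approach 1: lipid standardization -- express the biomarker per unit of serum lipid.
        X1 = sm.add_constant(pd.DataFrame({"log_pop": np.log(df["pop_serum"] / df["lipids"])}))
        m1 = sm.Logit(df["case"], X1).fit(disp=0)

        # Approach 2: lipid adjustment -- keep the wet-weight biomarker, add lipids as a covariate.
        X2 = sm.add_constant(pd.DataFrame({"log_pop": np.log(df["pop_serum"]), "lipids": df["lipids"]}))
        m2 = sm.Logit(df["case"], X2).fit(disp=0)
        print(m1.params["log_pop"], m2.params["log_pop"])         # POP coefficients under each approach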

  6. Central Core Laboratory versus Site Interpretation of Coronary CT Angiography: Agreement and Association with Cardiovascular Events in the PROMISE Trial.

    PubMed

    Lu, Michael T; Meyersohn, Nandini M; Mayrhofer, Thomas; Bittner, Daniel O; Emami, Hamed; Puchner, Stefan B; Foldyna, Borek; Mueller, Martin E; Hearne, Steven; Yang, Clifford; Achenbach, Stephan; Truong, Quynh A; Ghoshhajra, Brian B; Patel, Manesh R; Ferencik, Maros; Douglas, Pamela S; Hoffmann, Udo

    2018-04-01

    Purpose To assess concordance and relative prognostic utility between central core laboratory and local site interpretation for significant coronary artery disease (CAD) and cardiovascular events. Materials and Methods In the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE) trial, readers at 193 North American sites interpreted coronary computed tomographic (CT) angiography as part of the clinical evaluation of stable chest pain. Readers at a central core laboratory also interpreted CT angiography blinded to clinical data, site interpretation, and outcomes. Significant CAD was defined as stenosis greater than or equal to 50%; cardiovascular events were defined as a composite of cardiovascular death or myocardial infarction. Results In 4347 patients (51.8% women; mean age ± standard deviation, 60.4 years ± 8.2), core laboratory and site interpretations were discordant in 16% (683 of 4347), most commonly because of a finding of significant CAD by site but not by core laboratory interpretation (80%, 544 of 683). Overall, core laboratory interpretation resulted in 41% fewer patients being reported as having significant CAD (14%, 595 of 4347 vs 23%, 1000 of 4347; P < .001). Over a median follow-up period of 25 months, 1.3% (57 of 4347) sustained myocardial infarction or cardiovascular death. The C statistic for future myocardial infarction or cardiovascular death was 0.61 (95% confidence interval [CI]: 0.54, 0.68) for the core laboratory and 0.63 (95% CI: 0.56, 0.70) for the sites. Conclusion Compared with interpretation by readers at 193 North American sites, standardized core laboratory interpretation classified 41% fewer patients as having significant CAD. © RSNA, 2017 Online supplemental material is available for this article. Clinical trial registration no. NCT01174550.

  7. The Equivalence of Regression Models Using Difference Scores and Models Using Separate Scores for Each Informant: Implications for the Study of Informant Discrepancies

    ERIC Educational Resources Information Center

    Laird, Robert D.; Weems, Carl F.

    2011-01-01

    Research on informant discrepancies has increasingly utilized difference scores. This article demonstrates the statistical equivalence of regression models using difference scores (raw or standardized) and regression models using separate scores for each informant to show that interpretations should be consistent with both models. First,…

  8. The Impact of Linking Distinct Achievement Test Scores on the Interpretation of Student Growth in Achievement

    ERIC Educational Resources Information Center

    Airola, Denise Tobin

    2011-01-01

    Changes to state tests impact the ability of State Education Agencies (SEAs) to monitor change in performance over time. The purpose of this study was to evaluate the Standardized Performance Growth Index (PGIz), a proposed statistical model for measuring change in student and school performance, across transitions in tests. The PGIz is a…

  9. The p-Value You Can't Buy.

    PubMed

    Demidenko, Eugene

    2016-01-02

    There is growing frustration with the concept of the p-value. Besides having an ambiguous interpretation, the p-value can be made as small as desired by increasing the sample size, n. The p-value is outdated and does not make sense with big data: everything becomes statistically significant. The root of the problem with the p-value is in the mean comparison. We argue that statistical uncertainty should be measured on the individual, not the group, level. Consequently, standard deviation (SD), not standard error (SE), error bars should be used to graphically present the data on two groups. We introduce a new measure based on the discrimination of individuals/objects from two groups, and call it the D-value. The D-value can be viewed as the n-of-1 p-value because it is computed in the same way as p while letting n equal 1. We show how the D-value is related to discrimination probability and the area above the receiver operating characteristic (ROC) curve. The D-value has a clear interpretation as the proportion of patients who get worse after the treatment, and as such it facilitates weighing up the likelihood of events under different scenarios. [Received January 2015. Revised June 2015.]
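
    To make the individual-level idea concrete, one plausible normal-theory reading of the D-value (the probability that a randomly chosen treated individual does worse than a randomly chosen control, built from SDs rather than SEs) is sketched below; the function name and the exact formula are assumptions, since the article's precise definition is not reproduced in the abstract.

        import numpy as np
        from scipy.stats import norm

        def d_value(control, treated):
            """n-of-1 analogue of a z-test p value: set n = 1, so the scale is built from
            standard deviations, not standard errors (assumes higher outcomes are better)."""
            delta = np.mean(treated) - np.mean(control)
            scale = np.sqrt(np.var(control, ddof=1) + np.var(treated, ddof=1))
            return norm.cdf(-delta / scale)   # proportion expected to do worse on treatment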

  10. Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang

    2012-02-01

    The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound has been proposed to improve success of placement. However, it has not become the standard-of-care because of limitations in the depictions and interpretation of the key anatomical features. We propose to augment the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to register groups of point sets. The method is compared to a volume-based model building technique and it demonstrates better generalization and compactness. We instantiate and register the shape model to a spine surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an epidural needle.

  11. Standards for reporting fish toxicity tests

    USGS Publications Warehouse

    Cope, O.B.

    1961-01-01

    The growing impetus of studies on fish and pesticides focuses attention on the need for standardized reporting procedures. Good methods have been developed for laboratory and field procedures in testing programs and in statistical features of assay experiments; and improvements are being made on methods of collecting and preserving fish, invertebrates, and other materials exposed to economic poisons. On the other hand, the reporting of toxicity data in a complete manner has lagged behind, and today's literature is little improved over yesterday's with regard to completeness and susceptibility to interpretation.

  12. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.

  13. Digital Image Quality And Interpretability: Database And Hardcopy Studies

    NASA Astrophysics Data System (ADS)

    Snyder, H. L.; Maddox, M. E.; Shedivy, D. I.; Turpin, J. A.; Burke, J. J.; Strickland, R. N.

    1982-02-01

    Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photointerpreters. Each image is 86 mm square and represents 4096 x 4096 8-bit pixels. In the "interpretation" experiment, each photointerpreter (judge) spent approximately two days extracting essential elements of information (EEIs) from one degraded version of each scene at a constant Gaussian blur level (FWHM = 40, 84, or 322 µm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories, based on the Shannon-Wiener measure of information, are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not statistically significant in the interpretation experiment, that of noise was significant, and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  14. Inference of median difference based on the Box-Cox model in randomized clinical trials.

    PubMed

    Maruo, K; Isogawa, N; Gosho, M

    2015-05-10

    In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
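
    A minimal sketch of the general idea (transform both groups with a common Box-Cox parameter, work on the approximately normal scale, then back-transform the group means, which are also the medians under normality, to get a median difference on the original scale) is given below; the data are simulated and the pooled estimation of the transformation parameter is a simplification, not the authors' exact procedure.

        import numpy as np
        from scipy import stats, special

        rng = np.random.default_rng(2)
        x = rng.lognormal(0.0, 0.6, 80)   # treatment group (skewed, simulated)
        y = rng.lognormal(0.3, 0.6, 80)   # control group

        lam = stats.boxcox_normmax(np.concatenate([x, y]))   # common transformation parameter
        xt, yt = stats.boxcox(x, lam), stats.boxcox(y, lam)  # roughly normal after transformation

        # Under normality on the transformed scale the group mean equals the group median,
        # so back-transforming the means gives model-based medians on the original scale.
        median_diff = special.inv_boxcox(np.mean(xt), lam) - special.inv_boxcox(np.mean(yt), lam)
        print(round(median_diff, 3))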

  15. Assessment issues in the testing of children at school entry.

    PubMed

    Rock, Donald A; Stenner, A Jackson

    2005-01-01

    The authors introduce readers to the research documenting racial and ethnic gaps in school readiness. They describe the key tests, including the Peabody Picture Vocabulary Test (PPVT), the Early Childhood Longitudinal Study (ECLS), and several intelligence tests, and describe how they have been administered to several important national samples of children. Next, the authors review the different estimates of the gaps and discuss how to interpret these differences. In interpreting test results, researchers use the statistical term "standard deviation" to compare scores across the tests. On average, the tests find a gap of about 1 standard deviation. The ECLS-K estimate is the lowest, about half a standard deviation. The PPVT estimate is the highest, sometimes more than 1 standard deviation. When researchers adjust those gaps statistically to take into account different outside factors that might affect children's test scores, such as family income or home environment, the gap narrows but does not disappear. Why such different estimates of the gap? The authors consider explanations such as differences in the samples, racial or ethnic bias in the tests, and whether the tests reflect different aspects of school "readiness," and conclude that none is likely to explain the varying estimates. Another possible explanation is the Spearman Hypothesis: that all tests are imperfect measures of a general ability construct, g; the more highly a given test correlates with g, the larger the gap will be. But the Spearman Hypothesis, too, leaves questions to be investigated. A gap of 1 standard deviation may not seem large, but the authors show clearly how it results in striking disparities in the performance of black and white students and why it should be of serious concern to policymakers.

  16. Abbreviated Combined MR Protocol: A New Faster Strategy for Characterizing Breast Lesions.

    PubMed

    Moschetta, Marco; Telegrafo, Michele; Rella, Leonarda; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe

    2016-06-01

    The use of an abbreviated magnetic resonance (MR) protocol has been recently proposed for cancer screening. The aim of our study is to evaluate the diagnostic accuracy of an abbreviated MR protocol combining short TI inversion recovery (STIR), turbo-spin-echo (TSE)-T2 sequences, a pre-contrast T1, and a single intermediate (3 minutes after contrast injection) post-contrast T1 sequence for characterizing breast lesions. A total of 470 patients underwent breast MR examination for screening, problem solving, or preoperative staging. Two experienced radiologists evaluated both standard and abbreviated protocols in consensus. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for both protocols were calculated (with the histological findings and 6-month ultrasound follow-up as the reference standard) and compared with the McNemar test. The post-processing and interpretation times for the MR images were compared with the paired t test. In 177 of 470 (38%) patients, the MR sequences detected 185 breast lesions. Standard and abbreviated protocols obtained sensitivity, specificity, diagnostic accuracy, PPV, and NPV values respectively of 92%, 92%, 92%, 68%, and 98% and of 89%, 91%, 91%, 64%, and 98%, with no statistically significant difference (P < .0001). The mean post-processing and interpretation times were, respectively, 7 ± 1 minutes and 6 ± 3.2 minutes for the standard protocol and 1 ± 1.2 minutes and 2 ± 1.2 minutes for the abbreviated protocol, with a statistically significant difference (P < .01). An abbreviated combined MR protocol represents a time-saving tool for radiologists and patients, with the same diagnostic potential as the standard protocol, in patients undergoing breast MRI for screening, problem solving, or preoperative staging. Copyright © 2016 Elsevier Inc. All rights reserved.
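
    The accuracy measures and the paired comparison used above can be reproduced from a 2x2 confusion table and the discordant pairs; the counts in the Python sketch below are invented for illustration only and are not the study's data.

        from scipy.stats import binom

        def diagnostic_accuracy(tp, fp, fn, tn):
            sens = tp / (tp + fn); spec = tn / (tn + fp)
            ppv  = tp / (tp + fp); npv  = tn / (tn + fn)
            acc  = (tp + tn) / (tp + fp + fn + tn)
            return sens, spec, ppv, npv, acc

        def exact_mcnemar(b, c):
            """Exact McNemar test on discordant pairs: b lesions classed positive only by one
            protocol, c only by the other (two-sided binomial probability under p = 0.5)."""
            n = b + c
            return min(1.0, 2 * binom.cdf(min(b, c), n, 0.5))

        print(diagnostic_accuracy(tp=62, fp=29, fn=5, tn=374))   # hypothetical counts
        print(exact_mcnemar(b=6, c=3))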

  17. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic is given by a standard normal cumulative distribution function evaluated at a quantity that depends on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by a standard normal cumulative distribution function evaluated at a quantity that depends on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
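
    The equal-variance binormal result can be checked with a quick simulation: under that model the c-statistic works out to Phi(sigma * log-odds ratio / sqrt(2)), i.e. a standard normal CDF of the product described above. The parameter values in the sketch are arbitrary, and estimating the empirical c-statistic by pairwise comparison is a simplification of fitting a full logistic model.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        sigma, delta, n = 1.5, 0.8, 200_000      # common SD and mean difference (arbitrary)
        x0 = rng.normal(0.0, sigma, n)           # explanatory variable, those without the condition
        x1 = rng.normal(delta, sigma, n)         # explanatory variable, those with the condition

        beta = delta / sigma**2                  # log-odds ratio per unit of x under this model
        c_theory = norm.cdf(sigma * beta / np.sqrt(2))   # = Phi(delta / (sigma * sqrt(2)))
        c_empirical = (x1 > x0).mean()           # Monte Carlo estimate of P(X_case > X_control)
        print(round(c_theory, 4), round(c_empirical, 4))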

  18. Instruction in the Fine and Manual Arts in the United States: A Statistical Monograph. Bulletin, 1909, No. 6. Whole Number 406

    ERIC Educational Resources Information Center

    Bailey, Henry Turner

    1909-01-01

    Art instruction aims to raise the standard of taste. It includes instruction in seeing and interpreting the beautiful in nature and the arts, in drawing, both free-hand and instrumental, in designing, coloring, and modeling, in manipulating paper, cloth, leather, wood, metal, or other materials, to produce a result having elements of beauty. Art…

  19. Using the Sampling Margin of Error to Assess the Interpretative Validity of Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    James, David E.; Schraw, Gregory; Kuch, Fred

    2015-01-01

    We present an equation, derived from standard statistical theory, that can be used to estimate sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rates and sample variability on the estimated sampling margin of error, and present results in four tables that allow…
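
    The article's exact equation is not reproduced in the abstract; as an assumption, the standard finite-population-corrected margin of error for a class-mean rating, which shows the same dependence on sample size, response rate (n/N) and variability, would look like this:

        import math

        def sampling_margin_of_error(s, n, N, z=1.96):
            """z * SE of the mean with a finite-population correction: the n respondents are
            a sample from a class of N possible raters with item standard deviation s."""
            return z * (s / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

        # e.g. a class of N = 40 students, n = 20 responses, item SD s = 0.9
        print(round(sampling_margin_of_error(0.9, 20, 40), 2))   # about 0.28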

  20. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension.

    PubMed

    Jones, Deborah P; Richey, Phyllis A; Alpert, Bruce S

    2009-06-01

    The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary.

  1. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension

    PubMed Central

    Jones, Deborah P.; Richey, Phyllis A.; Alpert, Bruce S.

    2009-01-01

    Objective The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Methods Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Results Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Conclusion Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary. PMID:19433980

  2. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  3. MR imaging of knees having isolated and combined ligament injuries.

    PubMed

    Rubin, D A; Kettering, J M; Towers, J D; Britton, C A

    1998-05-01

    Although clinical evaluation and MR imaging both accurately reveal injuries in knees with isolated ligament tears, physical examination becomes progressively less reliable when multiple lesions exist. We investigated the accuracy of MR imaging of knees having varying degrees and numbers of ligament injuries. We prospectively interpreted the MR images of 340 consecutive injured knees and compared these interpretations with the results of subsequent arthroscopy or open surgery, which served as the gold standard. Our interpretations of MR images focused on five soft-tissue supporting structures (the two cruciate ligaments, the two collateral ligaments, and the patellar tendon) and the two menisci. Patients were divided into three groups: no ligament injuries, single ligament injuries, and multiple ligament injuries. Using MR imaging, we found overall sensitivity and specificity for diagnosing ligament tears to be 94% and 99%, respectively, when no or one ligament was torn and 88% and 84%, respectively, when two or more supporting structures were torn. The difference in specificity was statistically significant (p < .0001). Sensitivity for diagnosing meniscal tears decreased as the number of injured structures increased, but the relationship achieved statistical significance (p = .001) only for the medial meniscus. For all categories of injury, MR imaging was more accurate than clinical evaluation, statistics for which were taken from the orthopedic literature. In knees with multiple ligament injuries, the diagnostic specificity of MR imaging for ligament tears decreases, as does the sensitivity for medial meniscal tears.

  4. [Triple-type theory of statistics and its application in the scientific research of biomedicine].

    PubMed

    Hu, Liang-ping; Liu, Hui-gang

    2005-07-20

    To identify why so many people fail to grasp statistics and to propose a "triple-type theory of statistics" that addresses the problem in a creative way. The theory, developed from long experience in teaching and researching statistics, is presented and clarified. Examples demonstrate that the three types, i.e., the expressive type, the prototype and the standardized type, are essential for applying statistics rationally in both theory and practice, and further instances show that the three types are correlated with one another. The theory helps people see the essence of a problem when interpreting and analyzing experimental designs and statistical analyses in medical research. Investigation reveals that for some questions the three types are identical; for others the prototype is also the standardized type; and for still others the three types are distinct from each other. In some multifactor experiments no standardized type corresponding to the prototype exists at all, because the researchers committed the mistake of "incomplete control" when setting up the experimental groups, a problem that should be solved by the concept and method of "division". Once the triple-type structure of a question is clarified, a proper experimental design and statistical method can be chosen easily. The triple-type theory of statistics can help people avoid statistical mistakes, or at least dramatically decrease the rate of misuse, and improve the quality, level and speed of biomedical research that relies on statistics. It can also improve the quality of statistical textbooks and the teaching of statistics, and it shows how biomedical statistics can be advanced.

  5. Adopting a Patient-Centered Approach to Primary Outcome Analysis of Acute Stroke Trials by Use of a Utility-Weighted Modified Rankin Scale

    PubMed Central

    Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J.; Grotta, James C.; Broderick, Joseph; Jovin, Tudor G.; Nogueira, Raul G.; Elm, Jordan; Graves, Todd; Berry, Scott; Lees, Kennedy R.; Barreto, Andrew D.; Saver, Jeffrey L.

    2015-01-01

    Background and Purpose Although the modified Rankin Scale (mRS) is the most commonly employed primary endpoint in acute stroke trials, its power is limited when analyzed in dichotomized fashion and its indication of effect size is challenging to interpret when analyzed ordinally. Weighting the seven Rankin levels by utilities may improve scale interpretability while preserving statistical power. Methods A utility-weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient centered) and person-tradeoff (clinician centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Results Utility values were: mRS 0 = 1.0; mRS 1 = 0.91; mRS 2 = 0.76; mRS 3 = 0.65; mRS 4 = 0.33; mRS 5 and 6 = 0. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses. Both the UW-mRS and the ordinal mRS were statistically significant in six of eight unidirectional effect trials, while dichotomous analyses were statistically significant in two to four of eight. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. Conclusion A utility-weighted mRS performs similarly to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures the quantitative outcome is a valid reflection of patient-centered benefits. PMID:26138130
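
    Applying the utility weights quoted above to trial data reduces to a simple mapping and a difference in mean utilities; the outcome vectors in the sketch below are invented, so it only shows the mechanics of the endpoint, not any trial result.

        import numpy as np

        UTILITY = {0: 1.0, 1: 0.91, 2: 0.76, 3: 0.65, 4: 0.33, 5: 0.0, 6: 0.0}   # from the abstract

        def mean_utility(mrs_scores):
            return float(np.mean([UTILITY[s] for s in mrs_scores]))

        treated = [0, 1, 1, 2, 3, 4, 6]       # hypothetical 90-day mRS outcomes
        control = [1, 2, 3, 3, 4, 5, 6]
        effect = mean_utility(treated) - mean_utility(control)   # UW-mRS treatment effect
        print(round(effect, 3))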

  6. Jets and Metastability in Quantum Mechanics and Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Farhi, David

    I give a high-level overview of the state of particle physics in the introduction, accessible without any background in the field. I discuss improvements of theoretical and statistical methods used for collider physics. These include telescoping jets, a statistical method which was claimed to allow jet searches to increase their sensitivity by considering several interpretations of each event. We find that indeed multiple interpretations extend the power of searches, for both simple counting experiments and powerful multivariate fitting experiments, at least for h → bb̄ at the LHC. Then I propose a method for automation of background calculations using SCET by appropriating the technology of Monte Carlo generators such as MadGraph. In the third chapter I change gears and discuss the future of the universe. It has long been known that our pocket of the standard model is unstable; there is a lower-energy configuration in a remote part of the configuration space, to which our universe will, eventually, decay. While the timescales involved are on the order of 10^400 years (depending on how exactly one counts) and thus of no immediate worry, I discuss the shortcomings of the standard methods and propose a more physically motivated derivation for the decay rate. I then make various observations about the structure of decays in quantum field theory.

  7. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  8. Statistical Knowledge and the Over-Interpretation of Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    Boysen, Guy A.

    2017-01-01

    Research shows that teachers interpret small differences in student evaluations of teaching as meaningful even when available statistical information indicates that the differences are not reliable. The current research explored the effect of statistical training on college teachers' tendency to over-interpret student evaluation differences. A…

  9. Equivalent statistics and data interpretation.

    PubMed

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
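
    The equivalence claim can be made concrete for the two-sample t test: with the group sizes fixed, the p value and Cohen's d are each one-to-one functions of the t statistic (a JZS Bayes factor is likewise a function of t, n1 and n2, but is omitted from this sketch).

        import numpy as np
        from scipy import stats

        def summaries_from_t(t, n1, n2):
            """Recover the two-sided p value and Cohen's d from a two-sample t statistic."""
            df = n1 + n2 - 2
            p = 2 * stats.t.sf(abs(t), df)
            d = t * np.sqrt(1 / n1 + 1 / n2)
            return p, d

        print(summaries_from_t(t=2.5, n1=30, n2=30))   # same information, different summaries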

  10. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
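
    For reference, the textbook definition of the weak value Aw referred to above, for a pre-selected state |ψ⟩ and a post-selected state |φ⟩, can be written in LaTeX as

        A_w \;=\; \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle},

    which is in general complex-valued; the abstract's geometric/statistical reading applies to this quantity.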

  11. Interpretation of the rainbow color scale for quantitative medical imaging: perceptually linear color calibration (CSDF) versus DICOM GSDF

    NASA Astrophysics Data System (ADS)

    Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom

    2017-03-01

    Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration that uses the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g. PET SUV, quantitative MRI and CT and doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences and where improved interpretation accuracy and improved detection of color differences may contribute to a better diagnosis. Our results indicate that for diagnostic applications involving both grayscale and color images, CSDF should be chosen over DICOM GSDF and sRGB as it assures excellent detection for color images and at the same time maintains DICOM GSDF for grayscale images.

  12. Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty

    NASA Astrophysics Data System (ADS)

    Brumble, K. C.

    2012-12-01

    What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.

  13. Statistical foundations of liquid-crystal theory: II: Macroscopic balance laws.

    PubMed

    Seguin, Brian; Fried, Eliot

    2013-01-01

    Working on a state space determined by considering a discrete system of rigid rods, we use nonequilibrium statistical mechanics to derive macroscopic balance laws for liquid crystals. A probability function that satisfies the Liouville equation serves as the starting point for deriving each macroscopic balance. The terms appearing in the derived balances are interpreted as expected values and explicit formulas for these terms are obtained. Among the list of derived balances appear two, the tensor moment of inertia balance and the mesofluctuation balance, that are not standard in previously proposed macroscopic theories for liquid crystals but which have precedents in other theories for structured media.

  14. Statistical foundations of liquid-crystal theory

    PubMed Central

    Seguin, Brian; Fried, Eliot

    2013-01-01

    Working on a state space determined by considering a discrete system of rigid rods, we use nonequilibrium statistical mechanics to derive macroscopic balance laws for liquid crystals. A probability function that satisfies the Liouville equation serves as the starting point for deriving each macroscopic balance. The terms appearing in the derived balances are interpreted as expected values and explicit formulas for these terms are obtained. Among the list of derived balances appear two, the tensor moment of inertia balance and the mesofluctuation balance, that are not standard in previously proposed macroscopic theories for liquid crystals but which have precedents in other theories for structured media. PMID:23554513

  15. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
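
    To convey the idea behind ROS without the R tooling, a deliberately simplified single-detection-limit version is sketched below in Python; the actual library handles multiple detection limits and other details, and the function and variable names here are invented.

        import numpy as np
        from scipy.stats import norm

        def simple_ros(detects, n_censored):
            """Robust ROS with one detection limit: fit a lognormal to the detected values via
            their plotting positions, impute the non-detects from the fitted line, then
            summarize the combined data set."""
            detects = np.sort(np.asarray(detects, dtype=float))
            n = detects.size + n_censored
            p_below = n_censored / n                        # estimated probability of a non-detect
            ranks = np.arange(1, detects.size + 1)
            p_det = p_below + (1 - p_below) * ranks / (detects.size + 1)
            slope, intercept = np.polyfit(norm.ppf(p_det), np.log(detects), 1)
            p_cen = p_below * np.arange(1, n_censored + 1) / (n_censored + 1)
            imputed = np.exp(intercept + slope * norm.ppf(p_cen))
            combined = np.concatenate([imputed, detects])
            return combined.mean(), combined.std(ddof=1)

        print(simple_ros([0.8, 1.2, 2.0, 3.5, 6.1], n_censored=4))   # hypothetical concentrations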

  16. Standardization in gully erosion studies: methodology and interpretation of magnitudes from a global review

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Gomez, Jose Alfonso

    2016-04-01

    Standardization is the process of developing common conventions or procedures to facilitate the communication, use, comparison and exchange of products or information among different parties. It has been a useful tool in different fields from industry to statistics due to technical, economic and social reasons. In science the need for standardization has been recognised in the definition of methods as well as in publication formats. With respect to gully erosion, a number of initiatives have been carried out to propose common methodologies, for instance, for gully delineation (Castillo et al., 2014) and geometrical measurements (Casalí et al., 2015). The main aims of this work are: 1) to examine previous proposals in the gully erosion literature involving standardization processes; 2) to contribute with new approaches to improve the homogeneity of methodologies and presentation of results for a better communication among the gully erosion community. For this purpose, we evaluated the basic information provided on environmental factors, discussed the delineation and measurement procedures proposed in previous works and, finally, we analysed statistically the severity of degradation levels derived from different indicators at the world scale. As a result, we presented suggestions aiming to serve as guidance for survey design as well as for the interpretation of vulnerability levels and degradation rates for future gully erosion studies. References Casalí, J., Giménez, R., and Campo-Bescós, M. A.: Gully geometry: what are we measuring?, SOIL, 1, 509-513, doi:10.5194/soil-1-509-2015, 2015. Castillo C., Taguas E. V., Zarco-Tejada P., James M. R., and Gómez J. A. (2014), The normalized topographic method: an automated procedure for gully mapping using GIS, Earth Surf. Process. Landforms, 39, 2002-2015, doi: 10.1002/esp.3595

  17. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  18. Exploring the practicing-connections hypothesis: using gesture to support coordination of ideas in understanding a complex statistical concept.

    PubMed

    Son, Ji Y; Ramos, Priscilla; DeWolf, Melissa; Loftus, William; Stigler, James W

    2018-01-01

    In this article, we begin to lay out a framework and approach for studying how students come to understand complex concepts in rich domains. Grounded in theories of embodied cognition, we advance the view that understanding of complex concepts requires students to practice, over time, the coordination of multiple concepts, and the connection of this system of concepts to situations in the world. Specifically, we explore the role that a teacher's gesture might play in supporting students' coordination of two concepts central to understanding in the domain of statistics: mean and standard deviation. In Study 1 we show that university students who have just taken a statistics course nevertheless have difficulty taking both mean and standard deviation into account when thinking about a statistical scenario. In Study 2 we show that presenting the same scenario with an accompanying gesture to represent variation significantly impacts students' interpretation of the scenario. Finally, in Study 3 we present evidence that instructional videos on the internet fail to leverage gesture as a means of facilitating understanding of complex concepts. Taken together, these studies illustrate an approach to translating current theories of cognition into principles that can guide instructional design.

  19. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    PubMed

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
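
    The transformation alluded to in the conclusion is typically a log-ratio transform from compositional data analysis; a minimal centered log-ratio (CLR) sketch is given below (the waste-fraction percentages are invented, and zero parts would need to be replaced before taking logs).

        import numpy as np

        def clr(parts):
            """Centered log-ratio transform of a composition (parts summing to 100 or 1)."""
            x = np.asarray(parts, dtype=float)
            x = x / x.sum()                           # close the composition to 1
            geometric_mean = np.exp(np.log(x).mean())
            return np.log(x / geometric_mean)

        # hypothetical household-waste composition (% by mass): food, plastic, paper, other
        print(clr([32.0, 11.5, 20.5, 36.0]))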

  20. A review of contemporary methods for the presentation of scientific uncertainty.

    PubMed

    Makinson, K A; Hamby, D M; Edwards, J A

    2012-12-01

    Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.

  1. Methods for trend analysis: Examples with problem/failure data

    NASA Technical Reports Server (NTRS)

    Church, Curtis K.

    1989-01-01

    Statistics are emphasized as playing an important role in quality control and reliability. Consequently, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of this working handbook, which uses data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard, along with some additional techniques, and to identify patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
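
    Of the techniques listed, Kendall's rank correlation against time is the simplest to reproduce; a short sketch on invented monthly problem-report counts (note the zeros typical of such data) follows.

        import numpy as np
        from scipy.stats import kendalltau

        reports = np.array([5, 3, 4, 0, 2, 2, 1, 0, 1, 0, 0, 1])   # hypothetical monthly counts
        months = np.arange(reports.size)
        tau, p_value = kendalltau(months, reports)   # tau < 0 suggests a downward trend
        print(round(tau, 2), round(p_value, 3))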

  2. Student's Conceptions in Statistical Graph's Interpretation

    ERIC Educational Resources Information Center

    Kukliansky, Ida

    2016-01-01

    Histograms, box plots and cumulative distribution graphs are popular graphic representations for statistical distributions. The main research question that this study focuses on is how college students deal with interpretation of these statistical graphs when translating graphical representations into analytical concepts in descriptive statistics.…

  3. Multi-scale structure and topological anomaly detection via a new network statistic: The onion decomposition.

    PubMed

    Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine

    2016-08-18

    We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
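
    A compact sketch of the peeling described above (coreness taken from the k-core value at removal, layer from the removal pass) is given below; it follows the abstract's description rather than the authors' released code, and the adjacency-dict interface is an assumption.

        def onion_decomposition(adj):
            """Return {vertex: (coreness, onion layer)} for an undirected graph given as a
            dict mapping each vertex to a set of neighbours."""
            adj = {v: set(nbrs) for v, nbrs in adj.items()}
            result, layer = {}, 0
            while adj:
                core = min(len(nbrs) for nbrs in adj.values())          # current k-core value
                while adj and min(len(nbrs) for nbrs in adj.values()) <= core:
                    layer += 1                                          # one removal pass = one layer
                    peel = [v for v, nbrs in adj.items() if len(nbrs) <= core]
                    for v in peel:
                        result[v] = (core, layer)
                    for v in peel:
                        for u in adj.pop(v):
                            if u in adj:
                                adj[u].discard(v)
            return result

        # tiny example: a triangle with a pendant vertex
        print(onion_decomposition({1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}))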

  4. DNA Commission of the International Society for Forensic Genetics: revised and extended guidelines for mitochondrial DNA typing.

    PubMed

    Parson, W; Gusmão, L; Hares, D R; Irwin, J A; Mayr, W R; Morling, N; Pokorak, E; Prinz, M; Salas, A; Schneider, P M; Parsons, T J

    2014-11-01

    The DNA Commission of the International Society of Forensic Genetics (ISFG) regularly publishes guidelines and recommendations concerning the application of DNA polymorphisms to the question of human identification. Previous recommendations published in 2000 addressed the analysis and interpretation of mitochondrial DNA (mtDNA) in forensic casework. While the foundations set forth in the earlier recommendations still apply, new approaches to the quality control, alignment and nomenclature of mitochondrial sequences, as well as the establishment of mtDNA reference population databases, have been developed. Here, we describe these developments and discuss their application to both mtDNA casework and mtDNA reference population databasing applications. While the generation of mtDNA for forensic casework has always been guided by specific standards, it is now well-established that data of the same quality are required for the mtDNA reference population data used to assess the statistical weight of the evidence. As a result, we introduce guidelines regarding sequence generation, as well as quality control measures based on the known worldwide mtDNA phylogeny, that can be applied to ensure the highest quality population data possible. For both casework and reference population databasing applications, the alignment and nomenclature of haplotypes is revised here and the phylogenetic alignment proffered as acceptable standard. In addition, the interpretation of heteroplasmy in the forensic context is updated, and the utility of alignment-free database searches for unbiased probability estimates is highlighted. Finally, we discuss statistical issues and define minimal standards for mtDNA database searches. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. The emergent Copenhagen interpretation of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hollowood, Timothy J.

    2014-05-01

    We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. This interpretation describes a world in which definite measurement results are obtained with probabilities that reproduce the Born rule. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent book keeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems.

  6. Enhancing the quality of thermographic diagnosis in medicine

    NASA Astrophysics Data System (ADS)

    Kuklitskaya, A. G.; Olefir, G. I.

    2005-12-01

    This paper discusses the possibilities of enhancing the quality of thermographic diagnosis in medicine by increasing the objectivity of the processes of recording, visualization, and interpretation of IR images (thermograms) of patients. A test program is proposed for the diagnosis of oncopathology of the mammary glands, involving standard conditions for recording thermograms, visualization of the IR image in several versions of the color palette and shades of grey, its interpretation in accordance with a rigorously specified algorithm that takes into account the temperature regime in the Zakharin-Head zone of the heart, and the drawing of a conclusion based on a statistical analysis of literature data and the results of a survey of more than 3000 patients of the Minsk City Clinical Oncological Dispensary.

  7. Advances in Statistical Methods for Substance Abuse Prevention Research

    PubMed Central

    MacKinnon, David P.; Lockwood, Chondra M.

    2010-01-01

    The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467

  8. Applied statistics in ecology: common pitfalls and simple solutions

    Treesearch

    E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick

    2013-01-01

    The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...

  9. Statistical definition of relapse: case of family drug court.

    PubMed

    Alemi, Farrokh; Haack, Mary; Nemes, Susanna

    2004-06-01

    At any point in time, a patient's return to drug use can be seen either as a temporary event or as a return to persistent use. There is no formal standard for distinguishing persistent drug use from an occasional relapse. This lack of standardization persists although the consequences of either interpretation can be life altering. In a drug court or regulatory situation, for example, misinterpreting relapse as return to drug use could lead to incarceration, loss of child custody, or loss of employment. A clinician who mistakes a client's relapse for persistent drug use may fail to adjust treatment intensity to the client's needs. An empirical and standardized method for distinguishing relapse from persistent drug use is needed. This paper provides a tool for clinicians and judges to distinguish relapse from persistent use based on statistical analyses of patterns of the client's drug use. To accomplish this, a control chart is created for time-in-between relapses. This paper shows how a statistical limit can be calculated by examining either the client's history or other clients in the same program. If the client's time in between relapses exceeds the statistical limit, then the client has returned to persistent use. Otherwise, the drug use is temporary. To illustrate the method, it is applied to data from three family drug courts. The approach allows the estimation of control limits based on the client's as well as the court's historical patterns. The approach also allows comparison of courts based on recovery rates.
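
    One way to make the control-chart idea concrete is sketched below. It assumes exponentially distributed times between relapses and standard 3-sigma-equivalent probability limits, which may differ from the exact limits used in the paper; the relapse history is invented.

      # Hedged sketch of a time-between-events control chart for relapse data (Python).
      import math

      def exponential_control_limits(times_between_relapses, alpha=0.0027):
          """Probability limits (3-sigma equivalent) for time between events, exponential model."""
          theta = sum(times_between_relapses) / len(times_between_relapses)  # mean gap
          lcl = -theta * math.log(1 - alpha / 2)   # unusually short gap
          ucl = -theta * math.log(alpha / 2)       # unusually long gap
          return lcl, ucl

      history_days = [20, 35, 14, 28, 40, 22]      # hypothetical days between relapses
      lcl, ucl = exponential_control_limits(history_days)
      new_gap = 3                                  # a gap outside the limits departs from the client's historical pattern
      print(f"LCL={lcl:.2f} d, UCL={ucl:.1f} d, new gap outside limits: {new_gap < lcl or new_gap > ucl}")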

  10. Advanced statistical energy analysis

    NASA Astrophysics Data System (ADS)

    Heron, K. H.

    1994-09-01

    A high-frequency theory (advanced statistical energy analysis (ASEA)) is developed which takes account of the mechanism of tunnelling and uses a ray theory approach to track the power flowing around a plate or a beam network and then uses statistical energy analysis (SEA) to take care of any residual power. ASEA divides the energy of each sub-system into energy that is freely available for transfer to other sub-systems and energy that is fixed within the sub-system; sub-systems that are physically separate can thus still exchange power by tunnelling. ASEA can be interpreted as a series of mathematical models, the first of which is identical to standard SEA, while subsequent higher-order models converge on an accurate prediction. Using a structural assembly of six rods as an example, ASEA is shown to converge onto the exact results while SEA is shown to overpredict by up to 60 dB.

  11. Handheld echocardiography during hospitalization for acute myocardial infarction.

    PubMed

    Cullen, Michael W; Geske, Jeffrey B; Anavekar, Nandan S; Askew, J Wells; Lewis, Bradley R; Oh, Jae K

    2017-11-01

    Handheld echocardiography (HHE) is concordant with standard transthoracic echocardiography (TTE) in a variety of settings but has not been thoroughly compared with traditional TTE in patients with acute myocardial infarction (AMI). We hypothesized that, completed by experienced operators, HHE provides accurate diagnostic capabilities compared with standard TTE in AMI patients. This study prospectively enrolled patients admitted to the coronary care unit with AMI. Experienced sonographers performed HHE with a V-scan. All patients underwent clinical TTE. Each HHE was interpreted by 2 experts blinded to standard TTE. Agreement was assessed with κ statistics and concordance correlation coefficients. Analysis included 82 patients (mean age, 66 years; 74% male). On standard TTE, mean left ventricular (LV) ejection fraction was 46%. Correlation coefficients between HHE and TTE were 0.75 (95% confidence interval: 0.66 to 0.82) for LV ejection fraction and 0.69 (95% confidence interval: 0.58 to 0.77) for wall motion score index. The κ statistics ranged from 0.47 to 0.56 for LV enlargement, 0.55 to 0.79 for mitral regurgitation, and 0.44 to 0.57 for inferior vena cava dilatation. The κ statistics were highest for the anterior (0.81) and septal (0.71) apex and lowest for the mid inferolateral (0.36) and basal inferoseptal (0.36) walls. In patients with AMI, HHE and standard TTE demonstrate good correlation for LV function and wall motion. Agreement was less robust for structural abnormalities and specific wall segments. In experienced hands, HHE can provide a focused assessment of LV function in patients hospitalized with AMI; however, HHE should not substitute for comprehensive TTE. © 2017 Wiley Periodicals, Inc.
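
    The agreement statistics quoted above can be reproduced in principle with any standard kappa routine. The sketch below (Python, scikit-learn) uses hypothetical paired readings for illustration; it is not the study's data or analysis code.

      # Cohen's kappa between two categorical readings, e.g. HHE vs standard TTE grading.
      from sklearn.metrics import cohen_kappa_score

      hhe = ["none", "mild", "mild", "moderate", "severe", "mild", "none", "moderate"]
      tte = ["none", "mild", "moderate", "moderate", "severe", "mild", "mild", "moderate"]

      kappa = cohen_kappa_score(hhe, tte)
      print(f"Cohen's kappa = {kappa:.2f}")   # chance-corrected agreement between the two readings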

  12. Standardized Reporting of the Eczema Area and Severity Index (EASI) and the Patient-Oriented Eczema Measure (POEM): A Recommendation by the Harmonising Outcome Measures for Eczema (HOME) Initiative.

    PubMed

    Grinich, E; Schmitt, J; Küster, D; Spuls, P I; Williams, H C; Chalmers, J R; Thomas, K S; Apfelbacher, C; Prinsen, C A C; Furue, M; Stuart, B; Carter, B; Simpson, E

    2018-05-10

    Several organizations from multiple fields of medicine are setting standards for clinical research, including protocol development,[1] harmonization of outcome reporting,[2] statistical analysis,[3] quality assessment,[4] and reporting of findings.[1] Clinical research standardization facilitates the interpretation and synthesis of data, increases the usability of trial results for guideline groups and shared decision-making, and reduces selective outcome reporting bias. The mission of the Harmonising Outcome Measures for Eczema (HOME) initiative is to establish an agreed-upon core set of outcomes to be measured and reported in all clinical trials of atopic dermatitis (AD). This article is protected by copyright. All rights reserved.

  13. Method Designed to Respect Molecular Heterogeneity Can Profoundly Correct Present Data Interpretations for Genome-Wide Expression Analysis

    PubMed Central

    Chen, Chih-Hao; Hsu, Chueh-Lin; Huang, Shih-Hao; Chen, Shih-Yuan; Hung, Yi-Lin; Chen, Hsiao-Rong; Wu, Yu-Chung

    2015-01-01

    Although genome-wide expression analysis has become a routine tool for gaining insight into molecular mechanisms, extraction of information remains a major challenge. It has been unclear why standard statistical methods, such as the t-test and ANOVA, often lead to low levels of reproducibility, how likely applying fold-change cutoffs to enhance reproducibility is to miss key signals, and how adversely using such methods has affected data interpretations. We broadly examined expression data to investigate the reproducibility problem and discovered that molecular heterogeneity, a biological property of genetically different samples, has been improperly handled by the statistical methods. Here we give a mathematical description of the discovery and report the development of a statistical method, named HTA, for better handling molecular heterogeneity. We broadly demonstrate the improved sensitivity and specificity of HTA over the conventional methods and show that using fold-change cutoffs has lost much information. We illustrate the especial usefulness of HTA for heterogeneous diseases, by applying it to existing data sets of schizophrenia, bipolar disorder and Parkinson’s disease, and show it can abundantly and reproducibly uncover disease signatures not previously detectable. Based on 156 biological data sets, we estimate that the methodological issue has affected over 96% of expression studies and that HTA can profoundly correct 86% of the affected data interpretations. The methodological advancement can better facilitate systems understandings of biological processes, render biological inferences that are more reliable than they have hitherto been and engender translational medical applications, such as identifying diagnostic biomarkers and drug prediction, which are more robust. PMID:25793610

  14. Adopting a Patient-Centered Approach to Primary Outcome Analysis of Acute Stroke Trials Using a Utility-Weighted Modified Rankin Scale.

    PubMed

    Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J; Grotta, James C; Broderick, Joseph; Jovin, Tudor G; Nogueira, Raul G; Elm, Jordan J; Graves, Todd; Berry, Scott; Lees, Kennedy R; Barreto, Andrew D; Saver, Jeffrey L

    2015-08-01

    Although the modified Rankin Scale (mRS) is the most commonly used primary end point in acute stroke trials, its power is limited when analyzed in dichotomized fashion and its indication of effect size challenging to interpret when analyzed ordinally. Weighting the 7 Rankin levels by utilities may improve scale interpretability while preserving statistical power. A utility-weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient centered) and person-tradeoff (clinician centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Utility values were 1.0 for mRS level 0; 0.91 for mRS level 1; 0.76 for mRS level 2; 0.65 for mRS level 3; 0.33 for mRS level 4; 0 for mRS level 5; and 0 for mRS level 6. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses. Both the UW-mRS and the ordinal mRS were statistically significant in 6 of 8 unidirectional effect trials, whereas dichotomous analyses were statistically significant in 2 to 4 of 8. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results, whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. A UW-mRS performs similar to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures the quantitative outcome is a valid reflection of patient-centered benefits. © 2015 American Heart Association, Inc.
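
    The utility weights listed above lend themselves to a simple worked example. The sketch below applies them to invented mRS distributions for two hypothetical trial arms; only the weights themselves come from the abstract.

      # Applying the quoted utility weights to hypothetical mRS count distributions (Python).
      UTILITY = {0: 1.0, 1: 0.91, 2: 0.76, 3: 0.65, 4: 0.33, 5: 0.0, 6: 0.0}

      def mean_utility(mrs_counts):
          """Mean utility-weighted mRS for a dict {mRS level: number of patients}."""
          n = sum(mrs_counts.values())
          return sum(UTILITY[level] * count for level, count in mrs_counts.items()) / n

      treatment = {0: 30, 1: 40, 2: 35, 3: 30, 4: 25, 5: 15, 6: 25}   # invented counts
      control   = {0: 20, 1: 30, 2: 35, 3: 35, 4: 35, 5: 20, 6: 25}   # invented counts
      print(f"Mean utility difference = {mean_utility(treatment) - mean_utility(control):.3f}")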

  15. Interpreting Meta-Analyses of Genome-Wide Association Studies

    PubMed Central

    Han, Buhm; Eskin, Eleazar

    2012-01-01

    Meta-analysis is an increasingly popular tool for combining multiple genome-wide association studies in a single analysis to identify associations with small effect sizes. The effect sizes between studies in a meta-analysis may differ and these differences, or heterogeneity, can be caused by many factors. If heterogeneity is observed in the results of a meta-analysis, interpreting the cause of heterogeneity is important because the correct interpretation can lead to a better understanding of the disease and a more effective design of a replication study. However, interpreting heterogeneous results is difficult. The standard approach of examining the association p-values of the studies does not effectively predict if the effect exists in each study. In this paper, we propose a framework facilitating the interpretation of the results of a meta-analysis. Our framework is based on a new statistic representing the posterior probability that the effect exists in each study, which is estimated utilizing cross-study information. Simulations and application to the real data show that our framework can effectively segregate the studies predicted to have an effect, the studies predicted to not have an effect, and the ambiguous studies that are underpowered. In addition to helping interpretation, the new framework also allows us to develop a new association testing procedure taking into account the existence of effect. PMID:22396665

  16. Quality Metrics Of Digitally Derived Imagery And Their Relation To Interpreter Performance

    NASA Astrophysics Data System (ADS)

    Burke, James J.; Snyder, Harry L.

    1981-12-01

    Two hundred-fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photo-interpreters. Each image is 86 mm square and represents 4096² 8-bit pixels. In the "interpretation" experiment, each photo-interpreter (judge) spent approximately two days extracting Essential Elements of Information (EEI's) from one degraded version of each scene at a constant blur level (FWHM = 40, 84 or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not significant (p = 0.146) in the interpretation experiment, that of noise was significant (p = 0.005), and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  17. KOJAK Group Finder: Scalable Group Detection via Integrated Knowledge-Based and Statistical Reasoning

    DTIC Science & Technology

    2006-09-01

    STELLA and PowerLoom. These modules communicate with a knowledge base using KIF and standard relational database systems using either standard...groups ontology as well as a rule that infers additional seed members based on joint participation in a terrorism event. EDB schema files are a special... terrorism links from the Ali Baba EDB. Our interpretation of such links is that they encode that two people committed an act of

  18. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.

  19. Statistical assessment of DNA extraction reagent lot variability in real-time quantitative PCR

    USGS Publications Warehouse

    Bushon, R.N.; Kephart, C.M.; Koltun, G.F.; Francy, D.S.; Schaefer, F. W.; Lindquist, H.D. Alan

    2010-01-01

    Aims: The aim of this study was to evaluate the variability in lots of a DNA extraction kit using real-time PCR assays for Bacillus anthracis, Francisella tularensis and Vibrio cholerae. Methods and Results: Replicate aliquots of three bacteria were processed in duplicate with three different lots of a commercial DNA extraction kit. This experiment was repeated in triplicate. Results showed that cycle threshold values were statistically different among the different lots. Conclusions: Differences in DNA extraction reagent lots were found to be a significant source of variability for qPCR results. Steps should be taken to ensure the quality and consistency of reagents. Minimally, we propose that standard curves should be constructed for each new lot of extraction reagents, so that lot-to-lot variation is accounted for in data interpretation. Significance and Impact of the Study: This study highlights the importance of evaluating variability in DNA extraction procedures, especially when different reagent lots are used. Consideration of this variability in data interpretation should be an integral part of studies investigating environmental samples with unknown concentrations of organisms. © 2010 The Society for Applied Microbiology.
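
    One plausible way to test for a lot effect of this kind is a one-way ANOVA on cycle-threshold (Ct) values across reagent lots, sketched below with hypothetical Ct values; the study's actual statistical procedure may have differed.

      # One-way ANOVA across three hypothetical extraction-reagent lots (Python, SciPy).
      from scipy import stats

      lot_a = [24.1, 24.3, 23.9, 24.2, 24.0]
      lot_b = [25.0, 24.8, 25.2, 24.9, 25.1]
      lot_c = [24.2, 24.4, 24.1, 24.3, 24.5]

      f_stat, p_value = stats.f_oneway(lot_a, lot_b, lot_c)
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # a small p-value suggests the lots differ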

  20. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.

  1. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  2. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
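
    The matrix-algebra idea behind the macros can be sketched outside SPSS as well: the difference between two fitted values is a linear contrast c'b with standard error sqrt(c' Cov(b) c). The Python/statsmodels example below illustrates this on simulated data with a polynomial term; it is not the !OLScomp or !MLEcomp code itself.

      # Standard error of the difference between two fitted values as a linear contrast (Python).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, 100)
      y = 2 + 0.5 * x + 0.1 * x**2 + rng.normal(0, 1, 100)
      X = sm.add_constant(np.column_stack([x, x**2]))       # model with a polynomial term
      fit = sm.OLS(y, X).fit()

      x1 = np.array([1.0, 3.0, 9.0])                        # design row at x = 3
      x2 = np.array([1.0, 7.0, 49.0])                       # design row at x = 7
      contrast = x1 - x2
      diff = contrast @ fit.params
      se = np.sqrt(contrast @ fit.cov_params() @ contrast)  # SE of the fitted-value difference
      print(f"difference = {diff:.3f}, SE = {se:.3f}")
      print(fit.t_test(contrast))                           # same contrast with CI and p-value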

  3. Medical cost analysis: application to colorectal cancer data from the SEER Medicare database.

    PubMed

    Bang, Heejung

    2005-10-01

    Incompleteness is a key feature of most survival data. Numerous well established statistical methodologies and algorithms exist for analyzing life or failure time data. However, induced censorship invalidates the use of those standard analytic tools for some survival-type data such as medical costs. In this paper, some valid methods currently available for analyzing censored medical cost data are reviewed. Some cautionary findings under different assumptions are envisioned through application to medical costs from colorectal cancer patients. Cost analysis should be suitably planned and carefully interpreted under various meaningful scenarios even with judiciously selected statistical methods. This approach would be greatly helpful to policy makers who seek to prioritize health care expenditures and to assess the elements of resource use.

  4. Statistical and Epistemological Issues in the Evaluation of Treatment Efficacy of Pharmaceutical, Psychological, and Combination Treatments for Women's Sexual Desire Difficulties.

    PubMed

    Chivers, Meredith L; Basson, Rosemary; Brotto, Lori A; Graham, Cynthia A; Stephenson, Kyle R

    2017-04-03

    We were grateful to receive responses from Leonore Tiefer, Anita Clayton and Robert Pyke, and Richard Balon and Robert Segraves, to our commentary (Brotto et al., 2016 ) on Pyke and Clayton ( 2015 ). These commentaries raise a number of substantive statistical and epistemological issues relating to the evaluation of treatment efficacy in pharmaceutical, psychological, and combination treatments for sexual desire difficulties and caution researchers to remain mindful of sources of bias as we do the science. In what follows, we discuss each of these issues in turn in hopes of encouraging our field to adopt the highest possible standards when carrying out and interpreting treatment outcome research.

  5. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  6. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, $\bar{m}$, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, $\bar{m}$ and s are computed using exact formulae.
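
    For orientation, the classical PERT point estimates that this paper revisits are usually quoted as the following approximations; the paper itself replaces them with exact beta-distribution formulae conditioned on N.

      % Classical PERT approximations (the heuristics the paper revisits), in LaTeX:
      \bar{m} \approx \frac{a + 4m + b}{6}, \qquad s \approx \frac{b - a}{6}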

  7. How can Steganography BE AN Interpretation of the Redundancy in Pre-Mrna Ribbon?

    NASA Astrophysics Data System (ADS)

    Regoli, Massimo

    2013-01-01

    In the past years we have developed a new symmetric encryption algorithm based on a new interpretation of the biological phenomenon of the presence of redundant sequences inside pre-mRNA (the introns, apparently junk DNA) from a `science of information' point of view. First, we showed the flow of the algorithm by creating a parallel between the various biological aspects of the phenomenon of redundancy and the corresponding agents in our encryption algorithm. Then we set a strict mathematical terminology, identifying spaces and mathematical operators for the correct application and interpretation of the algorithm. Finally, last year, we proved that our algorithm has excellent statistical behaviour, being able to pass the standard statistical tests. This year we will try to add a new operator (agent) that is capable of allowing the introduction of a mechanism like a steganographic sub-message (sub-ribbon of mRNA) inside the original message (mRNA ribbon).

  8. Simulations for designing and interpreting intervention trials in infectious diseases.

    PubMed

    Halloran, M Elizabeth; Auranen, Kari; Baird, Sarah; Basta, Nicole E; Bellan, Steven E; Brookmeyer, Ron; Cooper, Ben S; DeGruttola, Victor; Hughes, James P; Lessler, Justin; Lofgren, Eric T; Longini, Ira M; Onnela, Jukka-Pekka; Özler, Berk; Seage, George R; Smith, Thomas A; Vespignani, Alessandro; Vynnycky, Emilia; Lipsitch, Marc

    2017-12-29

    Interventions in infectious diseases can have both direct effects on individuals who receive the intervention as well as indirect effects in the population. In addition, intervention combinations can have complex interactions at the population level, which are often difficult to adequately assess with standard study designs and analytical methods. Herein, we urge the adoption of a new paradigm for the design and interpretation of intervention trials in infectious diseases, particularly with regard to emerging infectious diseases, one that more accurately reflects the dynamics of the transmission process. In an increasingly complex world, simulations can explicitly represent transmission dynamics, which are critical for proper trial design and interpretation. Certain ethical aspects of a trial can also be quantified using simulations. Further, after a trial has been conducted, simulations can be used to explore the possible explanations for the observed effects. Much is to be gained through a multidisciplinary approach that builds collaborations among experts in infectious disease dynamics, epidemiology, statistical science, economics, simulation methods, and the conduct of clinical trials.

  9. How to take deontological concerns seriously in risk-cost-benefit analysis: a re-interpretation of the precautionary principle.

    PubMed

    John, S D

    2007-04-01

    In this paper the coherence of the precautionary principle as a guide to public health policy is considered. Two conditions that any account of the principle must meet are outlined, a condition of practicality and a condition of publicity. The principle is interpreted in terms of a tripartite division of the outcomes of action (good outcomes, normal bad outcomes and special bad outcomes). Such a division of outcomes can be justified on either "consequentialist" or "deontological" grounds. In the second half of the paper, it is argued that the precautionary principle is not necessarily opposed to risk-cost-benefit analysis, but, rather, should be interpreted as suggesting a lowering of our epistemic standards for assessing evidence that there is a link between some policy and "special bad" outcomes. This suggestion is defended against the claim that it mistakes the nature of statistical testing and against the charge that it is unscientific or antiscientific, and therefore irrational.

  10. A sub-ensemble theory of ideal quantum measurement processes

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Balian, Roger; Nieuwenhuizen, Theo M.

    2017-01-01

    In order to elucidate the properties currently attributed to ideal measurements, one must explain how the concept of an individual event with a well-defined outcome may emerge from quantum theory which deals with statistical ensembles, and how different runs issued from the same initial state may end up with different final states. This so-called "measurement problem" is tackled with two guidelines. On the one hand, the dynamics of the macroscopic apparatus A coupled to the tested system S is described mathematically within a standard quantum formalism, where "q-probabilities" remain devoid of interpretation. On the other hand, interpretative principles, aimed to be minimal, are introduced to account for the expected features of ideal measurements. Most of the five principles stated here, which relate the quantum formalism to physical reality, are straightforward and refer to macroscopic variables. The process can be identified with a relaxation of S + A to thermodynamic equilibrium, not only for a large ensemble E of runs but even for its sub-ensembles. The different mechanisms of quantum statistical dynamics that ensure these types of relaxation are exhibited, and the required properties of the Hamiltonian of S + A are indicated. The additional theoretical information provided by the study of sub-ensembles removes Schrödinger's quantum ambiguity of the final density operator for E which hinders its direct interpretation, and brings out a commutative behaviour of the pointer observable at the final time. The latter property supports the introduction of a last interpretative principle, needed to switch from the statistical ensembles and sub-ensembles described by quantum theory to individual experimental events. It amounts to identifying some formal "q-probabilities" with ordinary frequencies, but only those which refer to the final indications of the pointer. The desired properties of ideal measurements, in particular the uniqueness of the result for each individual run of the ensemble and von Neumann's reduction, are thereby recovered with economic interpretations. The status of Born's rule involving both A and S is re-evaluated, and contextuality of quantum measurements is made obvious.

  11. Critical Appraisal Skills Among Canadian Obstetrics and Gynaecology Residents: How Do They Fare?

    PubMed

    Bougie, Olga; Posner, Glenn; Black, Amanda Y

    2015-07-01

    Evidence-based medicine has become the standard of care in clinical practice. In this study, our objectives were to (1) determine the type of epidemiology and/or biostatistical training being given in Canadian obstetrics and gynaecology post-graduate programs, (2) determine obstetrics and gynaecology residents' level of confidence with critical appraisal, and (3) assess knowledge of fundamental biostatistical and epidemiological principles among Canadian obstetrics and gynaecology trainees. During a national standardized in-training examination, all Canadian obstetrics and gynaecology residents were invited to complete an anonymous cross-sectional survey to determine their levels of confidence with critical appraisal. Fifteen critical appraisal questions were integrated into the standardized examination to assess critical appraisal skills objectively. Primary outcomes were the residents' level of confidence interpreting biostatistical results and applying research findings to clinical practice, their desire for more biostatistics/epidemiological training in residency, and their performance on knowledge questions. A total of 301 of 355 residents completed the survey (response rate=84.8%). Most (76.7%) had little/no confidence interpreting research statistics. Confidence was significantly higher in those with increased seniority (OR=1.93), in those who had taken a previous epidemiology/statistics course (OR=2.65), and in those who had prior publications (OR=1.82). Many (68%) had little/no confidence applying research findings to clinical practice. Confidence increased significantly with increasing training year (P<0.001) and with formal epidemiology training during residency (OR=2.01). The mean score of the 355 residents on the knowledge assessment questions was 69.8%. Increasing seniority was associated with improved overall test performance (P=0.02). Poorer performance topics included analytical study method (9.9%), study design (36.9%), and sample size (42.0%). Most (84.4%) wanted more epidemiology teaching. Canadian obstetrics and gynaecology residents may have the biostatistical and epidemiological knowledge to interpret results published in the literature, but lack confidence applying these skills in clinical settings. Most residents want additional training in these areas, and residency programs should include training in formal curriculums to improve their confidence and prepare them for a lifelong practice of evidence-based medicine.

  12. The application of the statistical classifying models for signal evaluation of the gas sensors analyzing mold contamination of the building materials

    NASA Astrophysics Data System (ADS)

    Majerek, Dariusz; Guz, Łukasz; Suchorab, Zbigniew; Łagód, Grzegorz; Sobczuk, Henryk

    2017-07-01

    Mold that develops on moistened building barriers is a major cause of the Sick Building Syndrome (SBS). Fungal contamination is normally evaluated using standard biological methods which are time-consuming and require a lot of manual labor. Fungi emit Volatile Organic Compounds (VOC) that can be detected in the indoor air using several techniques of detection, e.g. chromatography. VOCs can also be detected using gas sensor arrays. All array sensors generate particular voltage signals that ought to be analyzed using properly selected statistical methods of interpretation. This work is focused on the attempt to apply statistical classifying models in the evaluation of signals from a gas sensor matrix used to analyze air sampled from the headspace of various types of building materials at different levels of contamination, as well as from clean reference materials.

  13. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
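
    The dependence of the tested hypotheses on the computing procedure can be seen directly in software output. The sketch below (Python/statsmodels, simulated unbalanced two-factor data) shows how different types of sums of squares yield different ANOVA tables for the same fitted model; it illustrates the general point rather than the paper's own examples.

      # Type I vs Type II sums of squares for an unbalanced two-factor design (Python).
      import numpy as np
      import pandas as pd
      from statsmodels.formula.api import ols
      from statsmodels.stats.anova import anova_lm

      rng = np.random.default_rng(1)
      df = pd.DataFrame({
          "a": ["low"] * 8 + ["high"] * 4,                          # unbalanced factor A
          "b": (["x"] * 5 + ["y"] * 3) + (["x"] * 1 + ["y"] * 3),   # unbalanced factor B
      })
      df["y"] = rng.normal(0, 1, len(df)) + (df["a"] == "high") * 1.0

      fit = ols("y ~ C(a) * C(b)", data=df).fit()
      print(anova_lm(fit, typ=1))   # sequential (order-dependent) hypotheses
      print(anova_lm(fit, typ=2))   # hypotheses adjusted for the other main effect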

  14. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  15. Data Acquisition and Preprocessing in Studies on Humans: What Is Not Taught in Statistics Classes?

    PubMed

    Zhu, Yeyi; Hernandez, Ladia M; Mueller, Peter; Dong, Yongquan; Forman, Michele R

    2013-01-01

    The aim of this paper is to address issues in research that may be missing from statistics classes and important for (bio-)statistics students. In the context of a case study, we discuss data acquisition and preprocessing steps that fill the gap between research questions posed by subject matter scientists and statistical methodology for formal inference. Issues include participant recruitment, data collection training and standardization, variable coding, data review and verification, data cleaning and editing, and documentation. Despite the critical importance of these details in research, most of these issues are rarely discussed in an applied statistics program. One reason for the lack of more formal training is the difficulty in addressing the many challenges that can possibly arise in the course of a study in a systematic way. This article can help to bridge this gap between research questions and formal statistical inference by using an illustrative case study for a discussion. We hope that reading and discussing this paper and practicing data preprocessing exercises will sensitize statistics students to these important issues and achieve optimal conduct, quality control, analysis, and interpretation of a study.

  16. Combining natural background levels (NBLs) assessment with indicator kriging analysis to improve groundwater quality data interpretation and management.

    PubMed

    Ducci, Daniela; de Melo, M Teresa Condesso; Preziosi, Elisabetta; Sellerino, Mariangela; Parrone, Daniele; Ribeiro, Luis

    2016-11-01

    The natural background level (NBL) concept is revisited and combined with indicator kriging method to analyze the spatial distribution of groundwater quality within a groundwater body (GWB). The aim is to provide a methodology to easily identify areas with the same probability of exceeding a given threshold (which may be a groundwater quality criteria, standards, or recommended limits for selected properties and constituents). Three case studies with different hydrogeological settings and located in two countries (Portugal and Italy) are used to derive NBL using the preselection method and validate the proposed methodology illustrating its main advantages over conventional statistical water quality analysis. Indicator kriging analysis was used to create probability maps of the three potential groundwater contaminants. The results clearly indicate the areas within a groundwater body that are potentially contaminated because the concentrations exceed the drinking water standards or even the local NBL, and cannot be justified by geogenic origin. The combined methodology developed facilitates the management of groundwater quality because it allows for the spatial interpretation of NBL values. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Localized Smart-Interpretation

    NASA Astrophysics Data System (ADS)

    Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom

    2014-05-01

    The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model, the geologist needs to rely on the geophysical data. The problem is, however, that the amount of geophysical data in many cases is so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that a lot of the information available from the geophysical surveys is unexploited, which is a problem, because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that 1. allows all geophysical data to be considered when building the geological model, 2. is fast, and 3. allows quantification of geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologists interpret, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc... The parameter d reflects an actual interpretation, such as for example the depth to the base of a ground water reservoir. First we infer a statistical model f(d,m), by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds interpreting, the number of interpreted datapoints from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f(d,m) successfully has been inferred, we are able to simulate how the geological expert would perform an interpretation given some external information m, through f(d|m). We will demonstrate this method applied to geological interpretation and densely sampled airborne electromagnetic data. In short, our goal is to build a statistical model describing how a geological expert performs geological interpretation given some geophysical data. We then wish to use this statistical model to perform semi-automatic interpretation, everywhere where such geophysical data exist, in a manner consistent with the choices made by a geological expert. Benefits of such a statistical model are that 1. it provides a quantification of how a geological expert performs interpretation based on available diverse data, 2. all available geophysical information can be used, and 3. it allows much faster interpretation of large data sets.

  18. Teaching Business Statistics with Real Data to Undergraduates and the Use of Technology in the Class Room

    ERIC Educational Resources Information Center

    Singamsetti, Rao

    2007-01-01

    In this paper an attempt is made to highlight some issues of interpretation of statistical concepts and interpretation of results as taught in undergraduate Business statistics courses. The use of modern technology in the class room is shown to have increased the efficiency and the ease of learning and teaching in statistics. The importance of…

  19. Maximum entropy models of ecosystem functioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
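
    For reference, the generic MaxEnt construction alluded to above chooses probabilities that maximize Shannon entropy subject to known constraints, yielding a Gibbs/exponential-family solution. The notation below is generic rather than that of the cited work.

      % Generic MaxEnt problem and its exponential-family solution, in LaTeX:
      \max_{p} \; -\sum_i p_i \ln p_i
      \quad \text{s.t.} \quad \sum_i p_i = 1, \;\; \sum_i p_i f_k(i) = F_k,
      \qquad
      p_i = \frac{1}{Z(\boldsymbol{\lambda})} \exp\!\Big(-\sum_k \lambda_k f_k(i)\Big)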

  20. A decision support system and rule-based algorithm to augment the human interpretation of the 12-lead electrocardiogram.

    PubMed

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J

    The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity in ECG interpretation often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process. This computerised decision support system has been named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies where diagnostic ECG criteria are defined in an open storage format, Javascript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed where subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value=0.1852), and the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although results were not found to be statistically significant, we found: 1) our decision support tool increased the number of correct interpretations, 2) the DDA algorithm suggested the correct interpretation more often than humans, and 3) as many as 7 computerised diagnostic suggestions augmented human decision making in ECG interpretation. Statistical significance may be achieved by expanding sample size. Copyright © 2017 Elsevier Inc. All rights reserved.
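
    The rule-based matching of annotations against JSON-defined criteria can be illustrated with a toy sketch. The criteria, field names, and annotations below are invented for illustration and are not the IPI/DDA schema; Python is used here in place of the system's web-technology stack.

      # Toy rule engine: suggest diagnoses whose JSON-defined criteria are all annotated.
      import json

      criteria_json = """
      [
        {"diagnosis": "Left bundle branch block",
         "requires": ["QRS > 120 ms", "broad notched R in V6", "absent Q in V6"]},
        {"diagnosis": "First-degree AV block",
         "requires": ["PR > 200 ms"]}
      ]
      """

      def suggest(annotations, criteria):
          """Return diagnoses whose required findings are all present in the annotations."""
          found = set(annotations)
          return [c["diagnosis"] for c in criteria if set(c["requires"]) <= found]

      annotations = ["PR > 200 ms", "QRS > 120 ms"]
      print(suggest(annotations, json.loads(criteria_json)))   # ['First-degree AV block']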

  1. Methodological choices affect cancer incidence rates: a cohort study.

    PubMed

    Brooke, Hannah L; Talbäck, Mats; Feychting, Maria; Ljung, Rickard

    2017-01-19

    Incidence rates are fundamental to epidemiology, but their magnitude and interpretation depend on methodological choices. We aimed to examine the extent to which the definition of the study population affects cancer incidence rates. All primary cancer diagnoses in Sweden between 1958 and 2010 were identified from the national Cancer Register. Age-standardized and age-specific incidence rates of 29 cancer subtypes between 2000 and 2010 were calculated using four definitions of the study population: persons resident in Sweden 1) based on general population statistics; 2) with no previous subtype-specific cancer diagnosis; 3) with no previous cancer diagnosis except non-melanoma skin cancer; and 4) with no previous cancer diagnosis of any type. We calculated absolute and relative differences between methods. Age-standardized incidence rates calculated using general population statistics ranged from 6% lower (prostate cancer, incidence rate difference: -13.5/100,000 person-years) to 8% higher (breast cancer in women, incidence rate difference: 10.5/100,000 person-years) than incidence rates based on individuals with no previous subtype-specific cancer diagnosis. Age-standardized incidence rates in persons with no previous cancer of any type were up to 10% lower (bladder cancer in women) than rates in those with no previous subtype-specific cancer diagnosis; however, absolute differences were <5/100,000 person-years for all cancer subtypes. For some cancer subtypes incidence rates vary depending on the definition of the study population. For these subtypes, standardized incidence ratios calculated using general population statistics could be misleading. Moreover, etiological arguments should be used to inform methodological choices during study design.

  2. [Do we always correctly interpret the results of statistical nonparametric tests?].

    PubMed

    Moczko, Jerzy A

    2014-01-01

    The Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of tests commonly used to analyze clinical and laboratory data. These tests are considered to be extremely flexible, and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used with both interval and ordinal scales. The article presents an example based on the Mann-Whitney test showing that treating the choice of these four nonparametric tests as a kind of gold standard does not in every case lead to correct inference.
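    As a small illustration of the caution above (made-up data, scipy assumed available), the sketch below shows the Mann-Whitney test rejecting for two samples with essentially equal means but different distributional shapes; the test concerns the stochastic ordering of the two distributions, not a difference in means or medians per se.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two hypothetical samples with (nearly) equal means but different shapes.
group_a = rng.exponential(scale=1.0, size=200)      # mean 1, right-skewed
group_b = rng.normal(loc=1.0, scale=0.1, size=200)  # mean 1, symmetric

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"means: {group_a.mean():.2f} vs {group_b.mean():.2f}")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")  # typically highly significant
```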

  3. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
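    One plausible reading of the scaling described above, stated here only as an illustrative assumption (the paper derives the exact expressions for the effective error and the index), is a reliability-type ratio:

    $$ \mathrm{ECR} \;=\; \frac{\sigma^{2}_{S}}{\sigma^{2}_{S} + \sigma^{2}_{\mathrm{eff}}}, $$

    where \(\sigma^{2}_{S}\) is the true inter-individual variance in linear change (the latent slope variance) and \(\sigma^{2}_{\mathrm{eff}}\) is the effective error, which reflects instrument reliability, the temporal arrangement of measurement occasions, and the intercept variance and intercept-slope covariance.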

  4. Combinatorial interpretation of Haldane-Wu fractional exclusion statistics.

    PubMed

    Aringazin, A K; Mazhitov, M I

    2002-08-01

    Assuming that the maximal allowed number of identical particles in a state is an integer parameter, q, we derive the statistical weight and analyze the associated equation that defines the statistical distribution. The derived distribution covers Fermi-Dirac and Bose-Einstein ones in the particular cases q = 1 and q → ∞ (n_i/q → 1), respectively. We show that the derived statistical weight provides a natural combinatorial interpretation of Haldane-Wu fractional exclusion statistics, and present exact solutions of the distribution equation.
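    For orientation, the familiar maximum-occupancy-q (Gentile-type) distribution with the limiting behaviour stated above can be written as follows; this is the standard intermediate-statistics form, given here only to illustrate the quoted limits, not as the paper's derived distribution:

    $$ \bar{n}_i \;=\; \frac{1}{e^{\beta(\epsilon_i-\mu)}-1} \;-\; \frac{q+1}{e^{(q+1)\beta(\epsilon_i-\mu)}-1}, $$

    which reduces to the Fermi-Dirac form \(1/(e^{\beta(\epsilon_i-\mu)}+1)\) at q = 1 and recovers the Bose-Einstein form \(1/(e^{\beta(\epsilon_i-\mu)}-1)\) as q → ∞.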

  5. The effect of restructuring student writing in the general chemistry laboratory on student understanding of chemistry and on students' approach to the laboratory course

    NASA Astrophysics Data System (ADS)

    Rudd, James Andrew, II

    Many students encounter difficulties engaging with laboratory-based instruction, and reviews of research have indicated that the value of such instruction is not clearly evident. Traditional forms of writing associated with laboratory activities are commonly in a style used by professional scientists to communicate developed explanations. Students probably lack the interpretative skills of a professional, and writing in this style may not support students in learning how to develop scientific explanations. The Science Writing Heuristic (SWH) is an inquiry-based approach to laboratory instruction designed in part to promote student ability in developing such explanations. However, there is not a convincing body of evidence for the superiority of inquiry-based laboratory instruction in chemistry. In a series of studies, the performance of students using the SWH student template in place of the standard laboratory report format was compared to the performance of students using the standard format. The standard reports had Title, Purpose, Procedure, Data & Observations, Calculations & Graphs, and Discussion sections. The SWH reports had Beginning Questions & Ideas, Tests & Procedures, Observations, Claims, Evidence, and Reflection sections. The pilot study produced evidence that using the SWH improved the quality of laboratory reports, improved student performance on a laboratory exam, and improved student approach to laboratory work. A main study found that SWH students statistically exhibited a better understanding of physical equilibrium when written explanations and equations were analyzed on a lecture exam and performed descriptively better on a physical equilibrium practical exam task. In another main study, the activities covering the general equilibrium concept were restructured as an additional change, and it was found that SWH students exhibited a better understanding of chemical equilibrium as shown by statistically greater success in overcoming the common confusion of interpreting equilibrium as equal concentrations and by statistically better performance when explaining aspects of chemical equilibrium. Both main studies found that students and instructors spent less time on the SWH reports and that students preferred the SWH approach because it increased their level of mental engagement. The studies supported the conclusion that inquiry-based laboratory instruction benefits student learning and attitudes.

  6. Entanglement entropy of electromagnetic edge modes.

    PubMed

    Donnelly, William; Wall, Aron C

    2015-03-20

    The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decades old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions.

  7. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  8. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  9. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  10. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  11. Practical interpretation of CYP2D6 haplotypes: Comparison and integration of automated and expert calling.

    PubMed

    Ruaño, Gualberto; Kocherla, Mohan; Graydon, James S; Holford, Theodore R; Makowski, Gregory S; Goethe, John W

    2016-05-01

    We describe a population genetic approach to compare samples interpreted with expert calling (EC) versus automated calling (AC) for CYP2D6 haplotyping. The analysis represents 4812 haplotype calls based on signal data generated by the Luminex xMap analyzers from 2406 patients referred to a high-complexity molecular diagnostics laboratory for CYP450 testing. DNA was extracted from buccal swabs. We compared the results of expert calls (EC) and automated calls (AC) with regard to haplotype number and frequency. The ratio of EC to AC was 1:3. Haplotype frequencies from EC and AC samples were convergent across haplotypes, and their distribution was not statistically different between the groups. Most duplications required EC, as only expansions with homozygous or hemizygous haplotypes could be called automatically. High-complexity laboratories can offer equivalent interpretation to automated calling for non-expanded CYP2D6 loci, and superior interpretation for duplications. We have validated scientific expert calling specified by scoring rules as standard operating procedure integrated with an automated calling algorithm. The integration of EC with AC is a practical strategy for CYP2D6 clinical haplotyping. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. The Birth-Death-Mutation Process: A New Paradigm for Fat Tailed Distributions

    PubMed Central

    Maruvka, Yosef E.; Kessler, David A.; Shnerb, Nadav M.

    2011-01-01

    Fat tailed statistics and power-laws are ubiquitous in many complex systems. Usually the appearance of a few anomalously successful individuals (bio-species, investors, websites) is interpreted as reflecting some inherent “quality” (fitness, talent, giftedness) as in Darwin's theory of natural selection. Here we adopt the opposite, “neutral”, outlook, suggesting that the main factor explaining success is merely luck. The statistics emerging from the neutral birth-death-mutation (BDM) process is shown to fit marvelously many empirical distributions. While previous neutral theories have focused on the power-law tail, our theory economically and accurately explains the entire distribution. We thus suggest the BDM distribution as a standard neutral model: effects of fitness and selection are to be identified by substantial deviations from it. PMID:22069453

  13. An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.

    PubMed

    Obuchowski, Nancy A

    2006-02-15

    ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
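    One common nonparametric concordance-style estimator consistent with this description, shown here purely as an illustration under assumptions and not necessarily the exact estimator proposed in the paper, is the probability that the test and the continuous gold standard order a random pair of patients the same way:

    $$ \hat{\theta} \;=\; \frac{1}{N(N-1)} \sum_{i \ne j} \psi(X_i, X_j, Y_i, Y_j), \qquad \psi = \begin{cases} 1 & \text{if } (X_i - X_j)(Y_i - Y_j) > 0,\\ 0.5 & \text{if } X_i = X_j \text{ or } Y_i = Y_j,\\ 0 & \text{otherwise,} \end{cases} $$

    where X denotes the diagnostic test result, Y the continuous gold standard, and the interpretation of \(\hat{\theta}\) is analogous to that of the area under the ROC curve.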

  14. Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Lith, Janneke

    The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.

  15. For a statistical interpretation of Helmholtz' thermal displacement

    NASA Astrophysics Data System (ADS)

    Podio-Guidugli, Paolo

    2016-11-01

    On moving from the classic papers by Einstein and Langevin on Brownian motion, two consistent statistical interpretations are given for the thermal displacement, a scalar field formally introduced by Helmholtz, whose time derivative is by definition the absolute temperature.

  16. Evaluation of forensic DNA mixture evidence: protocol for evaluation, interpretation, and statistical calculations using the combined probability of inclusion.

    PubMed

    Bieber, Frederick R; Buckleton, John S; Budowle, Bruce; Butler, John M; Coble, Michael D

    2016-08-31

    The evaluation and interpretation of forensic DNA mixture evidence faces greater interpretational challenges due to increasingly complex mixture evidence. Such challenges include: casework involving low quantity or degraded evidence leading to allele and locus dropout; allele sharing of contributors leading to allele stacking; and differentiation of PCR stutter artifacts from true alleles. There is variation in statistical approaches used to evaluate the strength of the evidence when inclusion of a specific known individual(s) is determined, and the approaches used must be supportable. There are concerns that methods utilized for interpretation of complex forensic DNA mixtures may not be implemented properly in some casework. Similar questions are being raised in a number of U.S. jurisdictions, leading to some confusion about mixture interpretation for current and previous casework. Key elements necessary for the interpretation and statistical evaluation of forensic DNA mixtures are described. Given that the most common method for statistical evaluation of DNA mixtures in many parts of the world, including the USA, is the Combined Probability of Inclusion/Exclusion (CPI/CPE), exposition and elucidation of this method and a protocol for its use are the focus of this article. Formulae and other supporting materials are provided. Guidance and details of a DNA mixture interpretation protocol are provided for application of the CPI/CPE method in the analysis of more complex forensic DNA mixtures. This description, in turn, should help reduce the variability of interpretation with application of this methodology and thereby improve the quality of DNA mixture interpretation throughout the forensic community.
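    For background, the CPI statistic named here is conventionally computed per locus from the frequencies of all alleles detected in the mixture and multiplied across loci; this is the standard textbook form, stated as context rather than as the article's full protocol (which also addresses when the statistic may legitimately be applied):

    $$ \mathrm{CPI} \;=\; \prod_{\ell=1}^{L} \Bigl( \sum_{j \in A_\ell} p_j \Bigr)^{2}, \qquad \mathrm{CPE} \;=\; 1 - \mathrm{CPI}, $$

    where \(A_\ell\) is the set of alleles detected in the mixture at locus \(\ell\) and \(p_j\) their estimated population frequencies.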

  17. Digital radiographic imaging transfer: comparison with plain radiographs.

    PubMed

    Averch, T D; O'Sullivan, D; Breitenbach, C; Beser, N; Schulam, P G; Moore, R G; Kavoussi, L R

    1997-04-01

    Advances in digital imaging and computer display technology have allowed development of clinical teleradiographic systems. There are limited data assessing the effectiveness of such systems when applied to urologic pathology. In an effort to appraise the effectiveness of teleradiology in identifying renal calculi, the accuracy of findings on transmitted radiographic images was compared with that of findings made when viewing the actual plain film. Plain films (KUB) were obtained from 26 patients who presented to the radiology department to rule out urinary calculous disease. The films were digitized by a radiograph scanner into ARCNEMA-2 file format, compressed by a NASA algorithm, and transferred via a 28.8-kbps modem over standard telephone lines to a remote section 25 miles away, where they were decompressed and viewed on a 1600 x 1200-pixel monitor. Two attending urologists and two endourologic fellows were randomized to read either the transmitted image or the original radiograph with minimal clinical history provided. Of the 26 plain radiographic films, 24 were correctly interpreted by the fellows and 25 by the attending physicians (92% and 96% accuracy, respectively) for a total accuracy of 94% with no statistical difference (p = 0.16). After compression, all but one of the digital images were transferred successfully. The attending physicians correctly interpreted 24 of the 25 digital images (96%), whereas the fellows were correct on 21 interpretations (84%), resulting in a total 90% accuracy with a significant difference between the groups (p < or = 0.04). Overall, no statistical difference between the interpretations of the plain film and the digital image was revealed (p = 0.21). Using available technology, KUB images can be transmitted to a remote site, and the location of a stone can be determined correctly. Higher accuracy is demonstrated by experienced surgeons.

  18. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
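    A hedged sketch of the kind of mixed discrete-continuous likelihood described above (notation and functional forms are assumptions for illustration; the paper's exact model and quasi-maximum-likelihood fitting details differ in their specifics): for trial i, let the latent equilibrium score have density \(f(y;\,x_i,\beta)\) given covariates \(x_i\), and let \(\pi(y)\) be the conditional probability of a fall given latent score y. The likelihood contribution is then

    $$ L_i \;=\; \begin{cases} \bigl[1-\pi(y_i)\bigr]\, f(y_i;\,x_i,\beta) & \text{if } \mathrm{ES}_i = y_i \text{ is observed},\\[4pt] \displaystyle\int \pi(y)\, f(y;\,x_i,\beta)\, dy & \text{if the trial ends in a fall and } \mathrm{ES}_i \text{ is set to zero.} \end{cases} $$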

  19. On the use of statistical methods to interpret electrical resistivity data from the Eumsung basin (Cretaceous), Korea

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Soo; Han, Soo-Hyung; Ryang, Woo-Hun

    2001-12-01

    Electrical resistivity mapping was conducted to delineate boundaries and architecture of the Eumsung Basin (Cretaceous). Basin boundaries are effectively clarified in electrical dipole-dipole resistivity sections as high-resistivity contrast bands. High resistivities most likely originate from the basement of Jurassic granite and Precambrian gneiss, contrasting with the lower resistivities from infilled sedimentary rocks. The electrical properties of basin-margin boundaries are compatible with the results of vertical electrical soundings and very-low-frequency electromagnetic surveys. A statistical analysis of the resistivity sections, based on standard deviation, is found to be an effective scheme for the subsurface reconstruction of basin architecture as well as the surface demarcation of basin-margin faults and brittle fracture zones, which are characterized by much higher standard deviation. Pseudo three-dimensional architecture of the basin is delineated by integrating the composite resistivity structure information from two cross-basin E-W magnetotelluric lines and dipole-dipole resistivity lines. Based on statistical analysis, the maximum depth of the basin varies from about 1 km in the northern part to 3 km or more in the middle part. This strong variation supports the view that the basin experienced pull-apart opening with rapid subsidence of the central blocks and asymmetric cross-basinal extension.

  20. [Comprehension of hazard pictograms of chemical products among cleaning workers].

    PubMed

    Martí Fernández, Francesc; van der Haar, Rudolf; López López, Juan Carlos; Portell, Mariona; Torner Solé, Anna

    2015-01-01

    To assess the comprehension among cleaning workers of the hazard pictograms as defined by the Globally Harmonized System (GHS) of the United Nations, concerning the classification, labeling and packaging of substances and mixtures. A sample of 118 workers was surveyed on their perception of the GHS hazard pictograms. Comprehensibility was measured by the percentage of correct answers and the degree to which they reflected International Organization for Standardization and American National Standards Institute standards for minimum level of comprehension. The influence of different variables to predict comprehension capacity was assessed using a logistic regression model. Three groups of pictograms could be distinguished which were statistically differentiated by their comprehensibility. Pictograms reflecting "acute toxicity" and "flammable", were described correctly by 94% and 95% of the surveyed population, respectively. For pictograms reflecting "systemic toxicity", "corrosive", "warning", "environment" and "explosive" the frequency of correct answers ranged from 48% to 64%, whereas those for pictograms "oxidizing" and "compressed gas" were interpreted correctly by only 7% of respondents. Prognostic factors for poor comprehension included: not being familiar with the pictograms, not having received training on safe use of chemical products, being an immigrant and being 54 years of age or older. Only two pictograms exceeded minimum standards for comprehension. Training, a tool proven to be effective to improve the correct interpretation of danger symbols, should be encouraged, especially in those groups with greater comprehension difficulties. Copyright belongs to the Societat Catalana de Salut Laboral.
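    A minimal sketch of the kind of logistic model described above (variable names, simulated data, and coefficients are hypothetical; statsmodels assumed available):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 118
# Hypothetical predictors mirroring those reported: familiarity with the pictograms,
# training on safe use of chemicals, immigrant status, and age 54 or older.
df = pd.DataFrame({
    "familiar": rng.integers(0, 2, n),
    "trained": rng.integers(0, 2, n),
    "immigrant": rng.integers(0, 2, n),
    "age54plus": rng.integers(0, 2, n),
})
# Simulated outcome: correct interpretation of a pictogram.
linpred = -0.5 + 1.2 * df.familiar + 0.8 * df.trained - 0.7 * df.immigrant - 0.6 * df.age54plus
prob = 1 / (1 + np.exp(-linpred))
df["correct"] = rng.binomial(1, prob.to_numpy())

model = smf.logit("correct ~ familiar + trained + immigrant + age54plus", data=df).fit(disp=False)
print(np.exp(model.params))  # odds ratios for each predictor
```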

  1. The effect of using graphic organizers in the teaching of standard biology

    NASA Astrophysics Data System (ADS)

    Pepper, Wade Louis, Jr.

    This study was conducted to determine if the use of graphic organizers in the teaching of standard biology would increase student achievement, involvement and quality of activities. The subjects were 10th grade standard biology students in a large southern inner city high school. The study was conducted over a six-week period in an instructional setting using action research as the investigative format. After calculation of the homogeneity between classes, random selection was used to determine the graphic organizer class and the control class. The graphic organizer class was taught unit material through a variety of instructional methods along with the use of teacher-generated graphic organizers. The control class was taught the same unit material using the same instructional methods, but without the use of graphic organizers. Data for the study were gathered from in-class written assignments, teacher-generated tests and text-generated tests, and rubric scores of an out-of-class written assignment and project. Also, data were gathered from student reactions, comments, observations and a teacher's research journal. Results were analyzed using descriptive statistics and qualitative interpretation. By comparing statistical results, it was determined that the use of graphic organizers did not make a statistically significant difference in the understanding of biological concepts and retention of factual information. Furthermore, the use of graphic organizers did not make a significant difference in motivating students to fulfill all class assignments with quality efforts and products. However, based upon student reactions and comments along with observations by the researcher, graphic organizers were viewed by the students as a favorable and helpful instructional tool. Notwithstanding the statistical results, student gains from instructional activities using graphic organizers were positive and merit the continuation of their use as an instructional tool.

  2. [Statistical approach to evaluate the occurrence of out-of acceptable ranges and accuracy for antimicrobial susceptibility tests in inter-laboratory quality control program].

    PubMed

    Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa

    2013-03-01

    To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) M100-S21. In the analysis, more than two out-of-acceptable-range results in the 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more occurrences of out-of-acceptable-range results. Then, a binomial test was applied to each participating laboratory. The results indicated that the occurrences of out-of-acceptable-range results in 11 laboratories were significantly higher when compared to the CLSI recommendation (allowable rate < or = 0.05). The standard deviation indices (SDI) were calculated by using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean SDI value from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted to the side of resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and sources of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide us with additional information that can improve the accuracy of the test results in clinical microbiology laboratories.
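    A small sketch of the two checks described, under assumptions (invented numbers; SDI taken here as the reported result minus the peer-group mean, divided by the peer-group standard deviation):

```python
from scipy import stats

# 1) Occurrence check: a laboratory with 3 out-of-acceptable-range results in 20 tests,
#    compared against an allowable rate of 0.05 (one-sided binomial test).
occurrence = stats.binomtest(k=3, n=20, p=0.05, alternative="greater")
print(f"binomial test p = {occurrence.pvalue:.3f}")

# 2) Accuracy check: standard deviation indices for one laboratory's reported results.
reported = [4.0, 8.0, 4.0, 16.0]       # hypothetical reported values per agent
group_mean = [4.2, 7.5, 3.9, 12.0]     # peer-group means per agent
group_sd = [1.0, 2.0, 0.8, 3.0]        # peer-group standard deviations per agent
sdi = [(r - m) / s for r, m, s in zip(reported, group_mean, group_sd)]

# One-sample t-test of whether the laboratory's mean SDI differs from zero (systematic drift).
t_stat, p_value = stats.ttest_1samp(sdi, popmean=0.0)
print(f"mean SDI = {sum(sdi) / len(sdi):.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```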

  3. Marketing of Personalized Cancer Care on the Web: An Analysis of Internet Websites

    PubMed Central

    Cronin, Angel; Bair, Elizabeth; Lindeman, Neal; Viswanath, Vish; Janeway, Katherine A.

    2015-01-01

    Internet marketing may accelerate the use of care based on genomic or tumor-derived data. However, online marketing may be detrimental if it endorses products of unproven benefit. We conducted an analysis of Internet websites to identify personalized cancer medicine (PCM) products and claims. A Delphi Panel categorized PCM as standard or nonstandard based on evidence of clinical utility. Fifty-five websites, sponsored by commercial entities, academic institutions, physicians, research institutes, and organizations, that marketed PCM included somatic (58%) and germline (20%) analysis, interpretive services (15%), and physicians/institutions offering personalized care (44%). Of 32 sites offering somatic analysis, 56% included specific test information (range 1–152 tests). All statistical tests were two-sided, and comparisons of website content were conducted using McNemar’s test. More websites contained information about the benefits than limitations of PCM (85% vs 27%, P < .001). Websites specifying somatic analysis were statistically significantly more likely to market one or more nonstandard tests as compared with standard tests (88% vs 44%, P = .04). PMID:25745021
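    As an illustration of the paired comparison reported above (the cell counts are invented to roughly match the quoted percentages; statsmodels assumed available), McNemar's test compares two proportions measured on the same websites:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired counts for 55 websites:
# rows = mentions benefits (yes/no), columns = mentions limitations (yes/no).
table = np.array([[14, 33],
                  [1, 7]])

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs (33 vs 1)
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.4f}")
```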

  4. Methodology to assess clinical liver safety data.

    PubMed

    Merz, Michael; Lee, Kwan R; Kullak-Ublick, Gerd A; Brueckner, Andreas; Watkins, Paul B

    2014-11-01

    Analysis of liver safety data has to be multivariate by nature and needs to take into account time dependency of observations. Current standard tools for liver safety assessment such as summary tables, individual data listings, and narratives address these requirements to a limited extent only. Using graphics in the context of a systematic workflow including predefined graph templates is a valuable addition to standard instruments, helping to ensure completeness of evaluation, and supporting both hypothesis generation and testing. Employing graphical workflows interactively allows analysis in a team-based setting and facilitates identification of the most suitable graphics for publishing and regulatory reporting. Another important tool is statistical outlier detection, accounting for the fact that for assessment of Drug-Induced Liver Injury, identification and thorough evaluation of extreme values has much more relevance than measures of central tendency in the data. Taken together, systematical graphical data exploration and statistical outlier detection may have the potential to significantly improve assessment and interpretation of clinical liver safety data. A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials.

  5. Interpretation of IEEE-854 floating-point standard and definition in the HOL system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    1995-01-01

    The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided into two parts: the first part presents the interpretation, and the second part presents the description in HOL.

  6. Analysis of biochemical genetic data on Jewish populations: II. Results and interpretations of heterogeneity indices and distance measures with respect to standards.

    PubMed

    Karlin, S; Kenett, R; Bonné-Tamir, B

    1979-05-01

    A nonparametric statistical methodology is used for the analysis of biochemical frequency data observed on a series of nine Jewish and six non-Jewish populations. Two categories of statistics are used: heterogeneity indices and various distance measures with respect to a standard. The latter are more discriminating in exploiting historical, geographical and culturally relevant information. A number of partial orderings and distance relationships among the populations are determined. Our concern in this study is to analyze similarities and differences among the Jewish populations, in terms of the gene frequency distributions for a number of genetic markers. Typical questions discussed are as follows: These Jewish populations differ in certain morphological and anthropometric traits. Are there corresponding differences in biochemical genetic constitution? How can we assess the extent of heterogeneity between and within groupings? Which class of markers (blood typings or protein loci) discriminates better among the separate populations? The results are quite surprising. For example, we found the Ashkenazi, Sephardi and Iraqi Jewish populations to be consistently close in genetic constitution and distant from all the other populations, namely the Yemenite and Cochin Jews, the Arabs, and the non-Jewish German and Russian populations. We found the Polish Jewish community the most heterogeneous among all Jewish populations. The blood loci discriminate better than the protein loci. A number of possible interpretations and hypotheses for these and other results are offered. The method devised for this analysis should prove useful in studying similarities and differences for other groups of populations for which substantial biochemical polymorphic data are available.

  7. Stochastic sampling effects in STR typing: Implications for analysis and interpretation.

    PubMed

    Timken, Mark D; Klein, Sonja B; Buoncristiani, Martin R

    2014-07-01

    The analysis and interpretation of forensic STR typing results can become more complicated when reduced template amounts are used for PCR amplification due to increased stochastic effects. These effects are typically observed as reduced heterozygous peak-height balance and increased frequency of undetected alleles (allelic "dropout"). To investigate the origins of these effects, a study was performed using the AmpFlSTR(®) Identifiler Plus(®) and MiniFiler(®) kits to amplify replicates from a dilution series of NIST Human DNA Quantitation Standard (SRM(®) 2372A). The resulting amplicons were resolved and detected on two different genetic analyzer platforms, the Applied Biosystems 3130xL and 3500 analyzers. Results from our study show that the four different STR/genetic analyzer combinations exhibited very similar peak-height ratio statistics when normalized for the amount of template DNA in the PCR. Peak-height ratio statistics were successfully modeled using the Poisson distribution to simulate pre-PCR stochastic sampling of the alleles, confirming earlier explanations that sampling is the primary source for peak-height imbalance in reduced template dilutions. In addition, template-based pre-PCR sampling simulations also successfully predicted allelic dropout frequencies, as modeled by logistic regression methods, for the low-template DNA dilutions. We discuss the possibility that an accurately quantified DNA template might be used to characterize the linear signal response for data collected using different STR kits or genetic analyzer platforms, so as to provide a standardized approach for comparing results obtained from different STR/CE combinations and to aid in validation studies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
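    A minimal simulation of the pre-PCR sampling model described above (the parameter values, detection rule, and use of copy-number ratio as a proxy for peak-height ratio are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_heterozygote(template_pg, n_rep=10000, pg_per_haploid_copy=3.3):
    """Draw per-allele template copies from a Poisson distribution and summarize
    the heterozygote balance (copy-number ratio) and allelic dropout frequency."""
    mean_copies_per_allele = template_pg / pg_per_haploid_copy / 2.0
    a = rng.poisson(mean_copies_per_allele, n_rep)
    b = rng.poisson(mean_copies_per_allele, n_rep)
    both_detected = (a > 0) & (b > 0)
    ratio = np.minimum(a[both_detected], b[both_detected]) / np.maximum(a[both_detected], b[both_detected])
    dropout = 1.0 - both_detected.mean()
    return ratio.mean(), dropout

for template in (500.0, 125.0, 31.0, 8.0):  # picograms of input DNA
    mean_ratio, dropout = simulate_heterozygote(template)
    print(f"{template:6.1f} pg: mean balance ~ {mean_ratio:.2f}, dropout frequency ~ {dropout:.3f}")
```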

  8. "Describing our whole experience": the statistical philosophies of W. F. R. Weldon and Karl Pearson.

    PubMed

    Pence, Charles H

    2011-12-01

    There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton's footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. 75 FR 22565 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-29

    ... collect, assemble, interpret, analyze, report and publish surveys; research, study, statistical and... commercial entities, for surveys or research, where such releases are consistent with the mission of the...): To collect, assemble, interpret, analyze, report and publish surveys; research, study, statistical...

  10. An Interpretative Phenomenological Analysis of the Common Core Standards Program in the State of South Dakota

    ERIC Educational Resources Information Center

    Alase, Abayomi

    2017-01-01

    This interpretative phenomenological analysis (IPA) study investigated and interpreted the Common Core State Standards program (the phenomenon) that has been the dominating topic of discussions amongst educators all across the country since the inauguration of the program in 2014/2015 school session. Common Core State Standards (CCSS) was a…

  11. Interpretation guidelines of a standard Y-chromosome STR 17-plex PCR-CE assay for crime casework.

    PubMed

    Roewer, Lutz; Geppert, Maria

    2012-01-01

    Y-STR analysis is an invaluable tool to examine evidence in sexual assault cases and in other forensic casework. Unambiguous detection of the male component in DNA mixtures with a high female background is still the main field of application of forensic Y-STR haplotyping. In the last years, powerful technologies including a 17-locus multiplex PCR assay have been introduced in the forensic laboratories. At the same time, statistical methods have been developed and adapted for interpretation of a nonrecombining, linear marker as the Y-chromosome which shows a strongly clustered geographical distribution due to the linear inheritance and the patrilocality of ancestral groups. Large population databases, namely the Y-STR Haplotype Reference Database (YHRD), have been established to assess the evidentiary value of Y-STR matches by means of frequency estimation methods (counting and extrapolation).
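    For background on the counting method mentioned above, a hedged sketch of the conventional frequency estimate and its upper confidence bound (given as the standard textbook form, not as the specific YHRD extrapolation methods):

    $$ \hat{p} \;=\; \frac{x}{N}, \qquad p_{\text{upper}} \;=\; 1 - \alpha^{1/N} \quad (\text{for } x = 0;\ \text{e.g. } \alpha = 0.05 \Rightarrow p_{\text{upper}} \approx 3/N), $$

    where x is the number of times the haplotype is observed in a reference database of N haplotypes and \(p_{\text{upper}}\) is the one-sided upper \((1-\alpha)\) confidence bound used when the haplotype has not been observed in the database.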

  12. Search for neutral MSSM Higgs bosons at LEP

    NASA Astrophysics Data System (ADS)

    Schael, S.; Barate, R.; Brunelière, R.; de Bonis, I.; Decamp, D.; Goy, C.; Jézéquel, S.; Lees, J.-P.; Martin, F.; Merle, E.; Minard, M.-N.; Pietrzyk, B.; Trocmé, B.; Bravo, S.; Casado, M. P.; Chmeissani, M.; Crespo, J. M.; Fernandez, E.; Fernandez-Bosman, M.; Garrido, L.; Martinez, M.; Pacheco, A.; Ruiz, H.; Colaleo, A.; Creanza, D.; de Filippis, N.; de Palma, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Nuzzo, S.; Ranieri, A.; Raso, G.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Tricomi, A.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Abbaneo, D.; Barklow, T.; Buchmüller, O.; Cattaneo, M.; Clerbaux, B.; Drevermann, H.; Forty, R. W.; Frank, M.; Gianotti, F.; Hansen, J. B.; Harvey, J.; Hutchcroft, D. E.; Janot, P.; Jost, B.; Kado, M.; Mato, P.; Moutoussi, A.; Ranjard, F.; Rolandi, L.; Schlatter, D.; Teubert, F.; Valassi, A.; Videau, I.; Badaud, F.; Dessagne, S.; Falvard, A.; Fayolle, D.; Gay, P.; Jousset, J.; Michel, B.; Monteil, S.; Pallin, D.; Pascolo, J. M.; Perret, P.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Kraan, A. C.; Nilsson, B. S.; Kyriakis, A.; Markou, C.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Brient, J.-C.; Machefert, F.; Rougé, A.; Videau, H.; Ciulli, V.; Focardi, E.; Parrini, G.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bossi, F.; Capon, G.; Cerutti, F.; Chiarella, V.; Mannocchi, G.; Laurelli, P.; Mannocchi, G.; Murtas, G. P.; Passalacqua, L.; Kennedy, J.; Lynch, J. G.; Negus, P.; O'Shea, V.; Thompson, A. S.; Wasserbaech, S.; Cavanaugh, R.; Dhamotharan, S.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Putzer, A.; Stenzel, H.; Tittel, K.; Wunsch, M.; Beuselinck, R.; Cameron, W.; Davies, G.; Dornan, P. J.; Girone, M.; Marinelli, N.; Nowell, J.; Rutherford, S. A.; Sedgbeer, J. K.; Thompson, J. C.; White, R.; Ghete, V. M.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bouhova-Thacker, E.; Bowdery, C. K.; Clarke, D. P.; Ellis, G.; Finch, A. J.; Foster, F.; Hughes, G.; Jones, R. W. L.; Pearson, M. R.; Robertson, N. A.; Smizanska, M.; van der Aa, O.; Delaere, C.; Leibenguth, G.; Lemaitre, V.; Blumenschein, U.; Hölldorfer, F.; Jakobs, K.; Kayser, F.; Müller, A.-S.; Renk, B.; Sander, H.-G.; Schmeling, S.; Wachsmuth, H.; Zeitnitz, C.; Ziegler, T.; Bonissent, A.; Coyle, P.; Curtil, C.; Ealet, A.; Fouchez, D.; Payre, P.; Tilquin, A.; Ragusa, F.; David, A.; Dietl, H.; Ganis, G.; Hüttmann, K.; Lütjens, G.; Männer, W.; Moser, H.-G.; Settles, R.; Villegas, M.; Wolf, G.; Boucrot, J.; Callot, O.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, P.; Jacholkowska, A.; Serin, L.; Veillet, J.-J.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Foà, L.; Giammanco, A.; Giassi, A.; Ligabue, F.; Messineo, A.; Palla, F.; Sanguinetti, G.; Sciabà, A.; Sguazzoni, G.; Spagnolo, P.; Tenchini, R.; Venturi, A.; Verdini, P. G.; Awunor, O.; Blair, G. A.; Cowan, G.; Garcia-Bellido, A.; Green, M. G.; Medcalf, T.; Misiejuk, A.; Strong, J. A.; Teixeira-Dias, P.; Clifft, R. W.; Edgecock, T. R.; Norton, P. R.; Tomalin, I. R.; Ward, J. J.; Bloch-Devaux, B.; Boumediene, D.; Colas, P.; Fabbro, B.; Lançon, E.; Lemaire, M.-C.; Locci, E.; Perez, P.; Rander, J.; Tuchming, B.; Vallage, B.; Litke, A. M.; Taylor, G.; Booth, C. N.; Cartwright, S.; Combley, F.; Hodgson, P. N.; Lehto, M.; Thompson, L. F.; Böhrer, A.; Brandt, S.; Grupen, C.; Hess, J.; Ngac, A.; Prange, G.; Borean, C.; Giannini, G.; He, H.; Putz, J.; Rothberg, J.; Armstrong, S. R.; Berkelman, K.; Cranmer, K.; Ferguson, D. P. 
S.; Gao, Y.; González, S.; Hayes, O. J.; Hu, H.; Jin, S.; Kile, J.; McNamara, P. A., III; Nielsen, J.; Pan, Y. B.; von Wimmersperg-Toeller, J. H.; Wiedenmann, W.; Wu, J.; Wu, S. L.; Wu, X.; Zobernig, G.; Dissertori, G.; Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alderweireld, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Berntzon, L.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, P.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; Dalmau, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, P.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Herr, H.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Hultqvist, K.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Johansson, P. D.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; Mc Nulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, T. 
D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Rames, J.; Read, A.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Segar, A.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Taffard, A. C.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.; Achard, P.; Adriani, O.; Aguilar-Benitez, M.; Alcaraz, J.; Alemanni, G.; Allaby, J.; Aloisio, A.; Alviggi, M. G.; Anderhub, H.; Andreev, V. P.; Anselmo, F.; Arefiev, A.; Azemoon, T.; Aziz, T.; Bagnaia, P.; Bajo, A.; Baksay, G.; Baksay, L.; Baldew, S. V.; Banerjee, S.; Banerjee, Sw.; Barczyk, A.; Barillère, R.; Bartalini, P.; Basile, M.; Batalova, N.; Battiston, R.; Bay, A.; Becattini, F.; Becker, U.; Behner, F.; Bellucci, L.; Berbeco, R.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biglietti, M.; Biland, A.; Blaising, J. J.; Blyth, S. C.; Bobbink, G. J.; Böhm, A.; Boldizsar, L.; Borgia, B.; Bottai, S.; Bourilkov, D.; Bourquin, M.; Braccini, S.; Branson, J. G.; Brochu, F.; Burger, J. D.; Burger, W. J.; Cai, X. D.; Capell, M.; Cara Romeo, G.; Carlino, G.; Cartacci, A.; Casaus, J.; Cavallari, F.; Cavallo, N.; Cecchi, C.; Cerrada, M.; Chamizo, M.; Chang, Y. H.; Chemarin, M.; Chen, A.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chiefari, G.; Cifarelli, L.; Cindolo, F.; Clare, I.; Clare, R.; Coignet, G.; Colino, N.; Costantini, S.; de La Cruz, B.; Cucciarelli, S.; de Asmundis, R.; Déglon, P.; Debreczeni, J.; Degré, A.; Dehmelt, K.; Deiters, K.; Della Volpe, D.; Delmeire, E.; Denes, P.; Denotaristefani, F.; de Salvo, A.; Diemoz, M.; Dierckxsens, M.; Dionisi, C.; Dittmar, M.; Doria, A.; Dova, M. T.; Duchesneau, D.; Duda, M.; Echenard, B.; Eline, A.; El Hage, A.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Extermann, P.; Falagan, M. A.; Falciano, S.; Favara, A.; Fay, J.; Fedin, O.; Felcini, M.; Ferguson, T.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. H.; Fisher, W.; Forconi, G.; Freudenreich, K.; Furetta, C.; Galaktionov, Yu.; Ganguli, S. N.; Garcia-Abia, P.; Gataullin, M.; Gentile, S.; Giagu, S.; Gong, Z. F.; Grenier, G.; Grimm, O.; Gruenewald, M. W.; Guida, M.; Gupta, V. K.; Gurtu, A.; Gutay, L. J.; Haas, D.; Hatzifotiadou, D.; Hebbeker, T.; Hervé, A.; Hirschfelder, J.; Hofer, H.; Hohlmann, M.; Holzner, G.; Hou, S. R.; Hu, J.; Jin, B. N.; Jindal, P.; Jones, L. 
W.; de Jong, P.; Josa-Mutuberría, I.; Kaur, M.; Kienzle-Focacci, M. N.; Kim, J. K.; Kirkby, J.; Kittel, W.; Klimentov, A.; König, A. C.; Kopal, M.; Koutsenko, V.; Kräber, M.; Kraemer, R. W.; Krüger, A.; Kunin, A.; Ladron de Guevara, P.; Laktineh, I.; Landi, G.; Lebeau, M.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Le Goff, J. M.; Leiste, R.; Levtchenko, M.; Levtchenko, P.; Li, C.; Likhoded, S.; Lin, C. H.; Lin, W. T.; Linde, F. L.; Lista, L.; Liu, Z. A.; Lohmann, W.; Longo, E.; Lu, Y. S.; Luci, C.; Luminari, L.; Lustermann, W.; Ma, W. G.; Malgeri, L.; Malinin, A.; Ma Na, C.; Mans, J.; Martin, J. P.; Marzano, F.; Mazumdar, K.; McNeil, R. R.; Mele, S.; Merola, L.; Meschini, M.; Metzger, W. J.; Mihul, A.; Milcent, H.; Mirabelli, G.; Mnich, J.; Mohanty, G. B.; Muanza, G. S.; Muijs, A. J. M.; Musicar, B.; Musy, M.; Nagy, S.; Natale, S.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Nisati, A.; Novak, T.; Nowak, H.; Ofierzynski, R.; Organtini, G.; Pal, I.; Palomares, C.; Paolucci, P.; Paramatti, R.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pedace, M.; Pensotti, S.; Perret-Gallix, D.; Piccolo, D.; Pierella, F.; Pieri, M.; Pioppi, M.; Piroué, P. A.; Pistolesi, E.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Pothier, J.; Prokofiev, D.; Rahal-Callot, G.; Rahaman, M. A.; Raics, P.; Raja, N.; Ramelli, R.; Rancoita, P. G.; Ranieri, R.; Raspereza, A.; Razis, P.; Rembeczki, S.; Ren, D.; Rescigno, M.; Reucroft, S.; Riemann, S.; Riles, K.; Roe, B. P.; Romero, L.; Rosca, A.; Rosemann, C.; Rosenbleck, C.; Rosier-Lees, S.; Roth, S.; Rubio, J. A.; Ruggiero, G.; Rykaczewski, H.; Sakharov, A.; Saremi, S.; Sarkar, S.; Salicio, J.; Sanchez, E.; Schäfer, C.; Schegelsky, V.; Schopper, H.; Schotanus, D. J.; Sciacca, C.; Servoli, L.; Shevchenko, S.; Shivarov, N.; Shoutko, V.; Shumilov, E.; Shvorob, A.; Son, D.; Souga, C.; Spillantini, P.; Steuer, M.; Stickland, D. P.; Stoyanov, B.; Straessner, A.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Sushkov, S.; Suter, H.; Swain, J. D.; Szillasi, Z.; Tang, X. W.; Tarjan, P.; Tauscher, L.; Taylor, L.; Tellili, B.; Teyssier, D.; Timmermans, C.; Ting, S. C. C.; Ting, S. M.; Tonwar, S. C.; Tóth, J.; Tully, C.; Tung, K. L.; Ulbricht, J.; Valente, E.; van de Walle, R. T.; Vasquez, R.; Vesztergombi, G.; Vetlitsky, I.; Viertel, G.; Vivargent, M.; Vlachos, S.; Vodopianov, I.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Wadhwa, M.; Wang, Q.; Wang, X. L.; Wang, Z. M.; Weber, M.; Wynhoff, S.; Xia, L.; Xu, Z. Z.; Yamamoto, J.; Yang, B. Z.; Yang, C. G.; Yang, H. J.; Yang, M.; Yeh, S. C.; Zalite, An.; Zalite, Yu.; Zhang, Z. P.; Zhao, J.; Zhu, G. Y.; Zhu, R. Y.; Zhuang, H. L.; Zichichi, A.; Zimmermann, B.; Zöller, M.; Abbiendi, G.; Ainsley, C.; Åkesson, P. F.; Alexander, G.; Allison, J.; Amaral, P.; Anagnostou, G.; Anderson, K. J.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barillari, T.; Barlow, R. J.; Batley, R. J.; Bechtle, P.; Behnke, T.; Bell, K. W.; Bell, P. J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, R. M.; Buesser, K.; Burckhart, H. J.; Campana, S.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Ciocca, C.; Csilling, A.; Cuffiani, M.; Dado, S.; de Jong, S.; de Roeck, A.; de Wolf, E. A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I. 
P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Gagnon, P.; Gary, J. W.; Gascon-Shotkin, S. M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, M.; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwé, M.; Günther, P. O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G. G.; Harel, A.; Hauschild, M.; Hawkes, C. M.; Hawkings, R.; Hemingway, R. J.; Herten, G.; Heuer, R. D.; Hill, J. C.; Hoffman, K.; Horváth, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jost, U.; Jovanovic, P.; Junk, T. R.; Kanaya, N.; Kanzaki, J.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R. K.; Kellogg, R. G.; Kennedy, B. W.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Krämer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G. D.; Landsman, H.; Lanske, D.; Layter, J. G.; Lellouch, D.; Letts, J.; Levinson, L.; Lillich, J.; Lloyd, S. L.; Loebinger, F. K.; Lu, J.; Ludwig, A.; Ludwig, J.; Mader, W.; Marcellini, S.; Martin, A. J.; Masetti, G.; Mashimo, T.; Mättig, P.; McKenna, J.; McPherson, R. A.; Meijers, F.; Menges, W.; Merritt, F. S.; Mes, H.; Meyer, N.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D. J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H. A.; Nisius, R.; O'Neale, S. W.; Oh, A.; Oreglia, M. J.; Orito, S.; Pahl, C.; Pásztor, G.; Pater, J. R.; Pilcher, J. E.; Pinfold, J.; Plane, D. E.; Poli, B.; Pooth, O.; Przybycień, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Roney, J. M.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E. K. G.; Schaile, A. D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schörner-Sadenius, T.; Schröder, M.; Schumacher, M.; Scott, W. G.; Seuster, R.; Shears, T. G.; Shen, B. C.; Sherwood, P.; Skuja, A.; Smith, A. M.; Sobie, R.; Söldner-Rembold, S.; Spano, F.; Stahl, A.; Strom, D.; Ströhmer, R.; Tarem, S.; Tasevsky, M.; Teuscher, R.; Thomson, M. A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trócsányi, Z.; Tsur, E.; Turner-Watson, M. F.; Ueda, I.; Ujvári, B.; Vollmer, C. F.; Vannerem, P.; Vértesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Ward, C. P.; Ward, D. R.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Wells, P. S.; Wengler, T.; Wermes, N.; Wilson, G. W.; Wilson, J. A.; Wolf, G.; Wyatt, T. R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, L.; Heinemeyer, S.; Pilaftsis, A.; Weiglein, G.

    2006-09-01

    The four LEP collaborations, ALEPH, DELPHI, L3 and OPAL, have searched for the neutral Higgs bosons which are predicted by the Minimal Supersymmetric Standard Model (MSSM). The data of the four collaborations are statistically combined and examined for their consistency with the background hypothesis and with a possible Higgs boson signal. The combined LEP data show no significant excess of events which would indicate the production of Higgs bosons. The search results are used to set upper bounds on the cross-sections of various Higgs-like event topologies. The results are interpreted within the MSSM in a number of “benchmark” models, including CP-conserving and CP-violating scenarios. These interpretations lead in all cases to large exclusions in the MSSM parameter space. Absolute limits are set on the parameter tan β and, in some scenarios, on the masses of neutral Higgs bosons.

  13. Diffuse Large B-Cell Lymphoma: Prospective Multicenter Comparison of Early Interim FLT PET/CT versus FDG PET/CT with IHP, EORTC, Deauville, and PERCIST Criteria for Early Therapeutic Monitoring

    PubMed Central

    Minamimoto, Ryogo; Fayad, Luis; Advani, Ranjana; Vose, Julie; Macapinlac, Homer; Meza, Jane; Hankins, Jordan; Mottaghy, Felix; Juweid, Malik

    2016-01-01

    Purpose To compare the performance characteristics of interim fluorine 18 (18F) fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) (after two cycles of chemotherapy) by using the most prominent standardized interpretive criteria (including International Harmonization Project [IHP] criteria, European Organization for Research and Treatment of Cancer [EORTC] criteria, and PET Response Criteria in Solid Tumors [PERCIST]) versus those of interim 18F fluorothymidine (FLT) PET/CT and simple visual interpretation. Materials and Methods This HIPAA-compliant prospective study was approved by the institutional review boards, and written informed consent was obtained. Patients with newly diagnosed diffuse large B-cell lymphoma (DLBCL) underwent both FLT and FDG PET/CT 18–24 days after two cycles of rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone or rituximab, etoposide, prednisone, vincristine, cyclophosphamide, and doxorubicin. For FDG PET/CT interpretation, IHP criteria, EORTC criteria, PERCIST, Deauville criteria, standardized uptake value, total lesion glycolysis, and metabolic tumor volume were used. FLT PET/CT images were interpreted with visual assessment by two reviewers in consensus. The interim (after cycle 2) FDG and FLT PET/CT studies were then compared with the end-of-treatment FDG PET/CT studies to determine which interim examination and/or criteria best predicted the result after six cycles of chemotherapy. Results From November 2011 to May 2014, there were 60 potential patients for inclusion, of whom 46 patients (24 men [mean age, 60.9 years ± 13.7; range, 28–78 years] and 22 women [mean age, 57.2 years ± 13.4; range, 25–76 years]) fulfilled the criteria. Thirty-four patients had complete response, and 12 had residual disease at the end of treatment. FLT PET/CT had a significantly higher positive predictive value (PPV) (91%) in predicting residual disease than did any FDG PET/CT interpretation method (42%–46%). No difference in negative predictive value (NPV) was found between FLT PET/CT (94%) and FDG PET/CT (82%–95%), regardless of the interpretive criteria used. FLT PET/CT showed NPVs that were statistically higher (P < .001–.008) than, or similar to, those of FDG PET/CT. Conclusion Early interim FLT PET/CT had a significantly higher PPV than standardized FDG PET/CT–based interpretation for therapeutic response assessment in DLBCL. © RSNA, 2016 Online supplemental material is available for this article. PMID:26854705
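
    The PPV and NPV comparisons above reduce to simple ratios from a 2 x 2 table of interim-scan result versus end-of-treatment outcome. The sketch below illustrates those definitions with hypothetical counts that are not the study's data; the function name and arguments are this sketch's own.

    ```python
    def predictive_values(tp, fp, fn, tn):
        """Positive and negative predictive values from a 2 x 2 table.

        tp: interim PET positive, residual disease at end of treatment
        fp: interim PET positive, complete response
        fn: interim PET negative, residual disease
        tn: interim PET negative, complete response
        """
        ppv = tp / (tp + fp)   # P(residual disease | positive interim scan)
        npv = tn / (tn + fn)   # P(complete response | negative interim scan)
        return ppv, npv

    # Hypothetical counts only, chosen to mimic a high-PPV tracer versus a low-PPV one.
    print(predictive_values(tp=10, fp=1, fn=1, tn=34))   # approx (0.91, 0.97)
    print(predictive_values(tp=10, fp=13, fn=1, tn=22))  # approx (0.43, 0.96)
    ```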

  14. Statistical Issues in Testing Conformance with the Quantitative Imaging Biomarker Alliance (QIBA) Profile Claims.

    PubMed

    Obuchowski, Nancy A; Buckler, Andrew; Kinahan, Paul; Chen-Mayer, Heather; Petrick, Nicholas; Barboriak, Daniel P; Bullen, Jennifer; Barnhart, Huiman; Sullivan, Daniel C

    2016-04-01

    A major initiative of the Quantitative Imaging Biomarker Alliance is to develop standards-based documents called "Profiles," which describe one or more technical performance claims for a given imaging modality. The term "actor" denotes any entity (device, software, or person) whose performance must meet certain specifications for the claim to be met. The objective of this paper is to present the statistical issues in testing actors' conformance with the specifications. In particular, we present the general rationale and interpretation of the claims, the minimum requirements for testing whether an actor achieves the performance requirements, the study designs used for testing conformity, and the statistical analysis plan. We use three examples to illustrate the process: apparent diffusion coefficient in solid tumors measured by MRI, change in Perc 15 as a biomarker for the progression of emphysema, and percent change in solid tumor volume by computed tomography as a biomarker for lung cancer progression. Copyright © 2016 The Association of University Radiologists. All rights reserved.

  15. Analysis of the procedures used to evaluate suicide crime scenes in Brazil: a statistical approach to interpret reports.

    PubMed

    Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira

    2014-08-01

    This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the traces detected (blood, instruments, and clothes), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide distribution in how the issue was addressed in the documents. We then examined a quantitative approach involving an empirical equation and used multivariate procedures to validate the proposed methodology. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  16. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    PubMed

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.

  17. Accuracy and reliability of tablet computer as an imaging console for detection of radiological signs of acute appendicitis using PACS workstation as reference standard.

    PubMed

    Awais, Muhammad; Khan, Dawar Burhan; Barakzai, Muhammad Danish; Rehman, Abdul; Baloch, Noor Ul-Ain; Nadeem, Naila

    2018-05-01

    To ascertain the accuracy and reliability of a tablet computer as an imaging console for detection of radiological signs of acute appendicitis [on focused appendiceal computed tomography (FACT)] using a Picture Archiving and Communication System (PACS) workstation as the reference standard. From January 2014 to June 2015, 225 patients underwent FACT at our institution. These scans were blindly re-interpreted by an independent consultant radiologist, first on a PACS workstation and, two weeks later, on a tablet. Scans were interpreted for the presence of radiological signs of acute appendicitis. Accuracy of the tablet was calculated using PACS as the reference standard. Kappa (κ) statistics were calculated as a measure of reliability. Of 225 patients, 99 had radiological evidence of acute appendicitis on the PACS workstation. The tablet was 100% accurate in detecting radiological signs of acute appendicitis. Appendicoliths, free fluid, lymphadenopathy, phlegmon/abscess, and perforation were identified on PACS in 90, 43, 39, 10, and 12 scans, respectively. There was excellent agreement between the tablet and PACS for detection of appendicolith (κ = 0.924), phlegmon/abscess (κ = 0.904), free fluid (κ = 0.863), lymphadenopathy (κ = 0.879), and perforation (κ = 0.904). The tablet computer, as an imaging console, was highly reliable and was as accurate as the PACS workstation for the radiological diagnosis of acute appendicitis.
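
    The κ values quoted above measure chance-corrected agreement between the tablet and PACS reads. A minimal sketch of Cohen's κ for a square agreement table is given below; the counts are invented for illustration and are not taken from the study.

    ```python
    import numpy as np

    def cohens_kappa(table):
        """Cohen's kappa for a square agreement table (rows: reader 1, columns: reader 2)."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        p_observed = np.trace(table) / n                                    # raw proportion of agreement
        p_expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2   # agreement expected by chance
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Hypothetical counts: 88 "sign present" and 130 "sign absent" agreements, 7 disagreements.
    print(round(cohens_kappa([[88, 3], [4, 130]]), 3))   # approx 0.936, i.e. excellent agreement
    ```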

  18. Assessing changes in drought characteristics with standardized indices

    NASA Astrophysics Data System (ADS)

    Vidal, Jean-Philippe; Najac, Julien; Martin, Eric; Franchistéguy, Laurent; Soubeyroux, Jean-Michel

    2010-05-01

    Standardized drought indices like the Standardized Precipitation Index (SPI) are increasingly adopted for drought reconstruction, monitoring and forecasting, and the SPI has recently been recommended by the World Meteorological Organization to characterize meteorological droughts. Such indices are based on the statistical distribution of a hydrometeorological variable (e.g., precipitation) in a given reference climate, and a drought event is defined as a period with continuously negative index values. Because of the way these indices are constructed, some issues may arise when using them in a non-stationary climate. This work thus aims to highlight such issues and to demonstrate the different ways these indices may - or may not - be applied and interpreted in the context of anthropogenic climate change. Three major points are detailed through examples taken from both a high-resolution gridded reanalysis dataset over France and transient projections from the ARPEGE general circulation model downscaled over France. The first point deals with the choice of the reference climate, and more specifically its type (from observations/reanalysis or from present-day modelled climate) and its record period. Second, the interpretation of actual changes is closely linked with the type of the selected drought feature over a future period: mean index value, under-threshold frequency, or drought event characteristics (number, mean duration and magnitude, seasonality, etc.). Finally, applicable approaches as well as related uncertainties depend on the availability of data from a future climate, whether in the form of a fully transient time series from present-day or only a future time slice. The projected evolution of drought characteristics under climate change must inform present decisions on long-term water resources planning. An assessment of changes in drought characteristics should therefore provide water managers with appropriate information that can help build effective adaptation strategies. This work thus aims to show the potential of standardized indices to describe changes in drought characteristics, but also possible pitfalls and potentially misleading interpretations.
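
    The SPI mentioned above is built by fitting a distribution to precipitation totals from a reference climate and mapping values through the fitted CDF onto a standard normal scale, so that negative values indicate drier-than-reference conditions. The sketch below is a deliberately simplified version (a plain gamma fit, ignoring the usual mixed treatment of zero-precipitation months and monthly stratification); the function name is this sketch's own, not an established SPI library.

    ```python
    import numpy as np
    from scipy import stats

    def spi(precip_ref, precip_new):
        """Simplified Standardized Precipitation Index.

        Fit a gamma distribution to the reference-period precipitation totals,
        then transform new totals through the fitted CDF to standard-normal scores.
        Zero-precipitation handling and monthly stratification are omitted here.
        """
        shape, loc, scale = stats.gamma.fit(precip_ref, floc=0)   # fix the location at zero
        cdf = stats.gamma.cdf(precip_new, shape, loc=loc, scale=scale)
        return stats.norm.ppf(cdf)        # negative values indicate drier than the reference climate

    rng = np.random.default_rng(0)
    reference = rng.gamma(shape=2.0, scale=30.0, size=360)    # synthetic monthly totals (mm)
    print(spi(reference, np.array([10.0, 60.0, 150.0])))      # dry, near-normal, wet
    ```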

  19. Philosophical perspectives on quantum chaos: Models and interpretations

    NASA Astrophysics Data System (ADS)

    Bokulich, Alisa Nicole

    2001-09-01

    The problem of quantum chaos is a special case of the larger problem of understanding how the classical world emerges from quantum mechanics. While we have learned that chaos is pervasive in classical systems, it appears to be almost entirely absent in quantum systems. The aim of this dissertation is to determine what implications the interpretation of quantum mechanics has for attempts to explain the emergence of classical chaos. There are three interpretations of quantum mechanics that have set out programs for solving the problem of quantum chaos: the standard interpretation, the statistical interpretation, and the deBroglie-Bohm causal interpretation. One of the main conclusions of this dissertation is that an interpretation alone is insufficient for solving the problem of quantum chaos and that the phenomenon of decoherence must be taken into account. Although a completely satisfactory solution of the problem of quantum chaos is still outstanding, I argue that the deBroglie-Bohm interpretation with the help of decoherence outlines the most promising research program to pursue. In addition to making a contribution to the debate in the philosophy of physics concerning the interpretation of quantum mechanics, this dissertation reveals two important methodological lessons for the philosophy of science. First, issues of reductionism and intertheoretic relations cannot be divorced from questions concerning the interpretation of the theories involved. Not only is the exploration of intertheoretic relations a central part of the articulation and interpretation of an individual theory, but the very terms used to discuss intertheoretic relations, such as `state' and `classical limit', are themselves defined by particular interpretations of the theory. The second lesson that emerges is that, when it comes to characterizing the relationship between classical chaos and quantum mechanics, the traditional approaches to intertheoretic relations, namely reductionism and theoretical pluralism, are inadequate. The fruitful ways in which models have been used in quantum chaos research point to the need for a new framework for addressing intertheoretic relations that focuses on models rather than laws.

  20. Statistical Reform in School Psychology Research: A Synthesis

    ERIC Educational Resources Information Center

    Swaminathan, Hariharan; Rogers, H. Jane

    2007-01-01

    Statistical reform in school psychology research is discussed in terms of research designs, measurement issues, statistical modeling and analysis procedures, interpretation and reporting of statistical results, and finally statistics education.

  1. Interpretation of digital breast tomosynthesis: preliminary study on comparison with picture archiving and communication system (PACS) and dedicated workstation.

    PubMed

    Kim, Young Seon; Chang, Jung Min; Yi, Ann; Shin, Sung Ui; Lee, Myung Eun; Kim, Won Hwa; Cho, Nariya; Moon, Woo Kyung

    2017-08-01

    To compare the diagnostic accuracy and efficiency in the interpretation of digital breast tomosynthesis (DBT) images using a picture archiving and communication system (PACS) and a dedicated workstation. 97 DBT images obtained for screening or diagnostic purposes were stored in both a workstation and a PACS and retrospectively evaluated in combination with digital mammography by three independent radiologists. Breast Imaging-Reporting and Data System final assessments and likelihood of malignancy (%) were assigned and the interpretation time when using the workstation and PACS was recorded. Receiver operating characteristic curve analysis, sensitivities and specificities were compared with histopathological examination and follow-up data as a reference standard. Area under the receiver operating characteristic curve values for cancer detection (0.839 vs 0.815, p = 0.6375) and sensitivity (81.8% vs 75.8%, p = 0.2188) showed no statistically significant differences between the workstation and PACS. However, specificity was significantly higher when analysing on the workstation than when using PACS (83.7% vs 76.9%, p = 0.009). When evaluating DBT images using PACS, only one case was deemed necessary to be reanalysed using the workstation. The mean time to interpret DBT images on PACS (1.68 min/case) was significantly longer than that on the workstation (1.35 min/case) (p < 0.0001). Interpretation of DBT images using PACS showed comparable diagnostic performance to a dedicated workstation, even though it required a longer reading time. Advances in knowledge: Interpretation of DBT images using PACS is a viable alternative for evaluating the images when a dedicated workstation is not available.
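
    Area under the ROC curve, sensitivity, and specificity of the kind compared above can be recomputed from reader scores with standard tooling. The sketch below assumes scikit-learn is available and uses made-up likelihood-of-malignancy scores and an arbitrary operating point; none of the numbers come from the study.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical reference standard (1 = cancer) and reader likelihood-of-malignancy scores (%)
    truth = np.array([1, 1, 1, 0, 0, 0, 0, 0])
    scores_workstation = np.array([90, 75, 25, 30, 20, 15, 10, 5])
    scores_pacs = np.array([85, 60, 25, 45, 30, 15, 10, 5])

    print(roc_auc_score(truth, scores_workstation))   # AUC for the workstation reads, approx 0.93
    print(roc_auc_score(truth, scores_pacs))          # AUC for the PACS reads, approx 0.87

    # Sensitivity and specificity at an arbitrary operating point (score >= 40 called positive)
    calls = scores_pacs >= 40
    sensitivity = (calls & (truth == 1)).sum() / (truth == 1).sum()
    specificity = (~calls & (truth == 0)).sum() / (truth == 0).sum()
    print(sensitivity, specificity)
    ```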

  2. A Proposed Interpretation of the ISO 10015 and Implications for HRD Theory and Research

    ERIC Educational Resources Information Center

    Jacobs, Ronald L.; Wang, Bryan

    2007-01-01

    While recent discussions of ISO 10015 (Guidelines for Training) have done much to promote the need for the standard, no interpretation of the standard has been presented that would guide its actual implementation. This paper proposes an interpretation of the ISO 10015 based on the specifications of the guideline and two other standards related to…

  3. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  4. Report: New analytical and statistical approaches for interpreting the relationships among environmental stressors and biomarkers

    EPA Science Inventory

    The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical, and statistical perspectives (Pleil et al. 2014; Sobus et al. 2011...

  5. An asymptotic analysis of the logrank test.

    PubMed

    Strawderman, R L

    1997-01-01

    Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
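
    For readers who want to see the test in practice, the sketch below runs a two-group logrank test on simulated survival data with the kind of treatment-group imbalance discussed above. It assumes the lifelines package is available; the simulation parameters are arbitrary.

    ```python
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)

    # Simulated survival times under a proportional-hazards alternative,
    # with deliberate treatment-group imbalance (n = 30 vs n = 120).
    t_control = rng.exponential(scale=10.0, size=30)
    t_treated = rng.exponential(scale=14.0, size=120)

    # Administrative censoring at t = 20.
    obs_control = (t_control <= 20).astype(int)
    obs_treated = (t_treated <= 20).astype(int)
    t_control = np.minimum(t_control, 20)
    t_treated = np.minimum(t_treated, 20)

    result = logrank_test(t_control, t_treated,
                          event_observed_A=obs_control,
                          event_observed_B=obs_treated)
    print(result.test_statistic, result.p_value)   # chi-squared statistic (1 df) and p-value
    ```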

  6. A whirling plane of satellite galaxies around Centaurus A challenges cold dark matter cosmology

    NASA Astrophysics Data System (ADS)

    Müller, Oliver; Pawlowski, Marcel S.; Jerjen, Helmut; Lelli, Federico

    2018-02-01

    The Milky Way and Andromeda galaxies are each surrounded by a thin plane of satellite dwarf galaxies that may be corotating. Cosmological simulations predict that most satellite galaxy systems are close to isotropic with random motions, so those two well-studied systems are often interpreted as rare statistical outliers. We test this assumption using the kinematics of satellite galaxies around the Centaurus A galaxy. Our statistical analysis reveals evidence for corotation in a narrow plane: Of the 16 Centaurus A satellites with kinematic data, 14 follow a coherent velocity pattern aligned with the long axis of their spatial distribution. In standard cosmological simulations, <0.5% of Centaurus A–like systems show such behavior. Corotating satellite systems may be common in the universe, challenging small-scale structure formation in the prevailing cosmological paradigm.

  7. Descriptive and inferential statistical methods used in burns research.

    PubMed

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11(22%) were randomised controlled trials, 18(35%) were cohort studies, 11(22%) were case control studies and 11(22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49(96%) articles. Data dispersion was calculated by standard deviation in 30(59%). Standard error of the mean was quoted in 19(37%). The statistical software product was named in 33(65%). Of the 49 articles that used inferential statistics, the tests were named in 47(96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), χ2 test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43(88%) and the exact significance levels were reported in 28(57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals in the fields of biostatistics and epidemiology when using more advanced statistical techniques. Copyright 2009 Elsevier Ltd and ISBI. All rights reserved.

  8. Measures of accuracy and performance of diagnostic tests.

    PubMed

    Drobatz, Kenneth J

    2009-05-01

    Diagnostic tests are integral to the practice of veterinary cardiology, any other specialty, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes diagnostic test properties, including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve. Review of practical book chapters and standard statistics manuscripts. Diagnostics such as sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve are described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential in reviewing clinical scientific papers and understanding evidence-based medicine.
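
    As a concrete companion to the properties listed above, the following sketch computes sensitivity, specificity, and likelihood ratios from a 2 x 2 table; the counts are hypothetical.

    ```python
    def test_performance(tp, fp, fn, tn):
        """Basic diagnostic-test accuracy measures from a 2 x 2 table."""
        sensitivity = tp / (tp + fn)          # P(test positive | disease present)
        specificity = tn / (tn + fp)          # P(test negative | disease absent)
        lr_positive = sensitivity / (1 - specificity)   # how much a positive result raises the odds of disease
        lr_negative = (1 - sensitivity) / specificity   # how much a negative result lowers the odds of disease
        return sensitivity, specificity, lr_positive, lr_negative

    # Hypothetical screening results: 50 diseased and 150 healthy animals.
    sens, spec, lr_pos, lr_neg = test_performance(tp=45, fp=15, fn=5, tn=135)
    print(sens, spec, lr_pos, lr_neg)   # 0.9, 0.9, 9.0, 0.11...
    ```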

  9. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts.

    PubMed

    Preisser, John S; Long, D Leann; Stamm, John W

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.
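
    The latent-class versus marginal distinction drawn above can be made concrete with a little arithmetic: in a zero-inflated model the marginal mean is (1 - pi) x mu, so an incidence rate ratio defined on the susceptible-class mean mu generally differs from one defined on the marginal mean whenever the zero-inflation probability pi also differs between groups. The numbers below are invented solely to show that gap and loosely mimic a ZINB-style versus an MZINB-style effect.

    ```python
    # Zero-inflated count model: with probability pi an individual is a structural zero,
    # otherwise counts come from a distribution with mean mu.
    pi_control, mu_control = 0.40, 5.0
    pi_treated, mu_treated = 0.55, 4.0

    marginal_control = (1 - pi_control) * mu_control    # overall mean count in controls = 3.0
    marginal_treated = (1 - pi_treated) * mu_treated    # overall mean count in treated = 1.8

    irr_latent = mu_treated / mu_control                # susceptible-class IRR (ZINB-style) = 0.80
    irr_marginal = marginal_treated / marginal_control  # marginal-mean IRR (MZINB-style) = 0.60
    print(irr_latent, irr_marginal)
    ```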

  10. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts

    PubMed Central

    Preisser, John S.; Long, D. Leann; Stamm, John W.

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two datasets, one consisting of fictional dmft counts in two groups and the other on DMFS among schoolchildren from a randomized clinical trial (RCT) comparing three toothpaste formulations to prevent incident dental caries, are analysed with negative binomial hurdle (NBH), zero-inflated negative binomial (ZINB), and marginalized zero-inflated negative binomial (MZINB) models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the RCT were similar despite their distinctive interpretations. Choice of statistical model class should match the study’s purpose, while accounting for the broad decline in children’s caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. PMID:28291962

  11. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory documents, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, the forecasting of critical quality attributes, sigma process capability, and process stability were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles for achievement of six sigma-capable processes is possible. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study on the process. Interpretation of such a study provides information about stability, process variability, changing of trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is the candidate for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
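
    The capability and sigma-level calculations described above follow standard formulas once the process mean and standard deviation have been estimated. The sketch below is a generic illustration with simulated tablet weights and assumed specification limits, not the study's data; the sigma level here is the simple short-term value without the conventional 1.5-sigma shift.

    ```python
    import numpy as np

    def capability(data, lsl, usl):
        """Process capability indices and an approximate short-term sigma level."""
        mean, sd = data.mean(), data.std(ddof=1)
        cp = (usl - lsl) / (6 * sd)                    # potential capability (ignores centring)
        cpk = min(usl - mean, mean - lsl) / (3 * sd)   # actual capability (penalises an off-centre mean)
        sigma_level = 3 * cpk                          # distance from mean to nearest limit, in SD units
        return cp, cpk, sigma_level

    rng = np.random.default_rng(7)
    weights = rng.normal(loc=250.5, scale=2.0, size=200)   # simulated tablet weights (mg)
    print(capability(weights, lsl=242.5, usl=257.5))       # specification: 250 mg +/- 3%
    ```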

  12. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  13. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  14. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  15. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  16. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
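
    Equivalence between a rapid method and the compendial method is commonly assessed with two one-sided tests (TOST) on log-transformed counts. The sketch below illustrates that general approach on simulated data; it is not claimed to be the exact procedure specified in Technical Report No. 33, and the margin, sample sizes, and degrees-of-freedom choice are assumptions of the sketch.

    ```python
    import numpy as np
    from scipy import stats

    def tost_equivalence(x, y, margin):
        """Two one-sided t-tests for equivalence of means within +/- margin.

        Returns the larger of the two one-sided p-values; equivalence is concluded
        (at level alpha) when that value falls below alpha.
        """
        nx, ny = len(x), len(y)
        diff = np.mean(x) - np.mean(y)
        se = np.sqrt(np.var(x, ddof=1) / nx + np.var(y, ddof=1) / ny)
        df = nx + ny - 2                                       # simple df choice; Welch df is also common
        p_lower = 1 - stats.t.cdf((diff + margin) / se, df)    # H0: diff <= -margin
        p_upper = stats.t.cdf((diff - margin) / se, df)        # H0: diff >= +margin
        return max(p_lower, p_upper)

    rng = np.random.default_rng(3)
    log_rapid = np.log10(rng.poisson(50, 30) + 1)       # simulated log10 counts, rapid method
    log_standard = np.log10(rng.poisson(52, 30) + 1)    # simulated log10 counts, standard plate count
    print(tost_equivalence(log_rapid, log_standard, margin=0.3))   # equivalence margin of 0.3 log10
    ```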

  17. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As the literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…
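
    The misconception described above can be confronted directly by simulation: the parameter stays fixed, the intervals vary from sample to sample, and about 95% of intervals constructed this way cover the parameter. A minimal sketch (simulation settings are arbitrary):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    true_mean, n, reps = 10.0, 25, 10_000

    covered = 0
    for _ in range(reps):
        sample = rng.normal(loc=true_mean, scale=3.0, size=n)
        half_width = stats.t.ppf(0.975, df=n - 1) * sample.std(ddof=1) / np.sqrt(n)
        lower, upper = sample.mean() - half_width, sample.mean() + half_width
        covered += (lower <= true_mean <= upper)

    # The long-run proportion of intervals capturing the fixed parameter is about 0.95;
    # any single realised interval either contains the parameter or it does not.
    print(covered / reps)
    ```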

  18. Potential errors and misuse of statistics in studies on leakage in endodontics.

    PubMed

    Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

    2013-04-01

    To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling', 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases the chi-square test or parametric methods were subsequently applied inappropriately. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimation of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.

  19. Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods

    PubMed Central

    Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.

    2012-01-01

    Rationale and Objectives Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570

  20. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    PubMed

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  1. Visualization of the variability of 3D statistical shape models by animation.

    PubMed

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects and the knowledge about their statistical variability are of great benefit in many computer-assisted medical applications such as image analysis, therapy planning, or surgery planning. Statistical shape models have been applied successfully to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
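
    Statistical shape models of this kind are typically built by principal component analysis of corresponded landmark coordinates, and "animating" a mode amounts to sweeping its weight between roughly -3 and +3 standard deviations around the mean shape. The sketch below shows that construction on random stand-in data; the array shapes and names are assumptions of the sketch, not the authors' pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_shapes, n_landmarks = 20, 100
    # Each row is one training shape: corresponded 3D landmarks flattened to length 3 * n_landmarks.
    shapes = rng.normal(size=(n_shapes, 3 * n_landmarks))

    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = s**2 / (n_shapes - 1)    # variance explained by each mode
    modes = Vt                             # principal modes of variation (one per row)

    # "Animate" the first mode: generate shape instances at -3, 0, and +3 standard deviations.
    for w in (-3.0, 0.0, 3.0):
        instance = mean_shape + w * np.sqrt(eigenvalues[0]) * modes[0]
        print(w, instance[:3])             # first landmark's x, y, z coordinates for this frame
    ```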

  2. Standardization of Clinical Assessment and Sample Collection Across All PERCH Study Sites

    PubMed Central

    Prosperi, Christine; Baggett, Henry C.; Brooks, W. Abdullah; Deloria Knoll, Maria; Hammitt, Laura L.; Howie, Stephen R. C.; Kotloff, Karen L.; Levine, Orin S.; Madhi, Shabir A.; Murdoch, David R.; O’Brien, Katherine L.; Thea, Donald M.; Awori, Juliet O.; Bunthi, Charatdao; DeLuca, Andrea N.; Driscoll, Amanda J.; Ebruke, Bernard E.; Goswami, Doli; Hidgon, Melissa M.; Karron, Ruth A.; Kazungu, Sidi; Kourouma, Nana; Mackenzie, Grant; Moore, David P.; Mudau, Azwifari; Mwale, Magdalene; Nahar, Kamrun; Park, Daniel E.; Piralam, Barameht; Seidenberg, Phil; Sylla, Mamadou; Feikin, Daniel R.; Scott, J. Anthony G.; O’Brien, Katherine L.; Levine, Orin S.; Knoll, Maria Deloria; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Fancourt, Nicholas; Fu, Wei; Hammitt, Laura L.; Higdon, Melissa M.; Kagucia, E. Wangeci; Karron, Ruth A.; Li, Mengying; Park, Daniel E.; Prosperi, Christine; Wu, Zhenke; Zeger, Scott L.; Watson, Nora L.; Crawley, Jane; Murdoch, David R.; Brooks, W. Abdullah; Endtz, Hubert P.; Zaman, Khalequ; Goswami, Doli; Hossain, Lokman; Jahan, Yasmin; Ashraf, Hasan; Howie, Stephen R. C.; Ebruke, Bernard E.; Antonio, Martin; McLellan, Jessica; Machuka, Eunice; Shamsul, Arifin; Zaman, Syed M.A.; Mackenzie, Grant; Scott, J. Anthony G.; Awori, Juliet O.; Morpeth, Susan C.; Kamau, Alice; Kazungu, Sidi; Kotloff, Karen L.; Tapia, Milagritos D.; Sow, Samba O.; Sylla, Mamadou; Tamboura, Boubou; Onwuchekwa, Uma; Kourouma, Nana; Toure, Aliou; Madhi, Shabir A.; Moore, David P.; Adrian, Peter V.; Baillie, Vicky L.; Kuwanda, Locadiah; Mudau, Azwifarwi; Groome, Michelle J.; Baggett, Henry C.; Thamthitiwat, Somsak; Maloney, Susan A.; Bunthi, Charatdao; Rhodes, Julia; Sawatwong, Pongpun; Akarasewi, Pasakorn; Thea, Donald M.; Mwananyanda, Lawrence; Chipeta, James; Seidenberg, Phil; Mwansa, James; wa Somwe, Somwe; Kwenda, Geoffrey

    2017-01-01

    Abstract Background. Variable adherence to standardized case definitions, clinical procedures, specimen collection techniques, and laboratory methods has complicated the interpretation of previous multicenter pneumonia etiology studies. To circumvent these problems, a program of clinical standardization was embedded in the Pneumonia Etiology Research for Child Health (PERCH) study. Methods. Between March 2011 and August 2013, standardized training on the PERCH case definition, clinical procedures, and collection of laboratory specimens was delivered to 331 clinical staff at 9 study sites in 7 countries (The Gambia, Kenya, Mali, South Africa, Zambia, Thailand, and Bangladesh), through 32 on-site courses and a training website. Staff competency was assessed throughout 24 months of enrollment with multiple-choice question (MCQ) examinations, a video quiz, and checklist evaluations of practical skills. Results. MCQ evaluation was confined to 158 clinical staff members who enrolled PERCH cases and controls, with scores obtained for >86% of eligible staff at each time-point. Median scores after baseline training were ≥80%, and improved by 10 percentage points with refresher training, with no significant intersite differences. Percentage agreement with the clinical trainer on the presence or absence of clinical signs on video clips was high (≥89%), with interobserver concordance being substantial to high (AC1 statistic, 0.62–0.82) for 5 of 6 signs assessed. Staff attained median scores of >90% in checklist evaluations of practical skills. Conclusions. Satisfactory clinical standardization was achieved within and across all PERCH sites, providing reassurance that any etiological or clinical differences observed across the study sites are true differences, and not attributable to differences in application of the clinical case definition, interpretation of clinical signs, or in techniques used for clinical measurements or specimen collection. PMID:28575355

  3. Interpreting “statistical hypothesis testing” results in clinical research

    PubMed Central

    Sarmukaddam, Sanjeev B.

    2012-01-01

    The difference between “clinical significance” and “statistical significance” should be kept in mind while interpreting “statistical hypothesis testing” results in clinical research. This fact is already known to many, but it is pointed out here again because the philosophy of “statistical hypothesis testing” is sometimes criticized unnecessarily, mainly owing to a failure to consider this distinction. Randomized controlled trials are wrongly criticized in a similar way. That a scientific method may not be applicable in some particular situation does not mean that the method is useless. Also remember that “statistical hypothesis testing” is not for decision making, and that the field of “decision analysis” is very much an integral part of the science of statistics. It is not correct to say that “confidence intervals have nothing to do with confidence” unless one understands the meaning of the word “confidence” as used in the context of a confidence interval. Interpretation of the results of every study should always consider all possible alternative explanations, such as chance, bias, and confounding. Statistical tests in inferential statistics are, in general, designed to answer the question “How likely is it that the difference found in the random sample(s) is due to chance?”, and therefore relying only on statistical significance when making clinical decisions should be avoided. PMID:22707861

  4. New physicochemical interpretations for the adsorption of food dyes on chitosan films using statistical physics treatment.

    PubMed

    Dotto, G L; Pinto, L A A; Hachicha, M A; Knani, S

    2015-03-15

    In this work, a statistical physics treatment was employed to study the adsorption of food dyes onto chitosan films, in order to obtain new physicochemical interpretations at the molecular level. Experimental equilibrium curves were obtained for the adsorption of four dyes (FD&C red 2, FD&C yellow 5, FD&C blue 2, Acid Red 51) at different temperatures (298, 313 and 328 K). A statistical physics formula was used to interpret these curves, and parameters such as the number of adsorbed dye molecules per site (n), anchorage number (n'), receptor site density (NM), adsorbed quantity at saturation (Nasat), steric hindrance (τ), concentration at half saturation (c1/2) and molar adsorption energy (ΔEa) were estimated. The relation of the above-mentioned parameters to the chemical structure of the dyes and to temperature was evaluated and interpreted. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Interpretation of Statistical Data: The Importance of Affective Expressions

    ERIC Educational Resources Information Center

    Queiroz, Tamires; Monteiro, Carlos; Carvalho, Liliane; François, Karen

    2017-01-01

    In recent years, research on teaching and learning of statistics emphasized that the interpretation of data is a complex process that involves cognitive and technical aspects. However, it is a human activity that involves also contextual and affective aspects. This view is in line with research on affectivity and cognition. While the affective…

  6. Does Training in Table Creation Enhance Table Interpretation? A Quasi-Experimental Study with Follow-Up

    ERIC Educational Resources Information Center

    Karazsia, Bryan T.; Wong, Kendal

    2016-01-01

    Quantitative and statistical literacy are core domains in the undergraduate psychology curriculum. An important component of such literacy includes interpretation of visual aids, such as tables containing results from statistical analyses. This article presents results of a quasi-experimental study with longitudinal follow-up that tested the…

  7. Targeting Change: Assessing a Faculty Learning Community Focused on Increasing Statistics Content in Life Science Curricula

    ERIC Educational Resources Information Center

    Parker, Loran Carleton; Gleichsner, Alyssa M.; Adedokun, Omolola A.; Forney, James

    2016-01-01

    Transformation of research in all biological fields necessitates the design, analysis, and interpretation of large data sets. Preparing students with the requisite skills in experimental design, statistical analysis and interpretation, and mathematical reasoning will require both curricular reform and faculty who are willing and able to integrate…

  8. College Students' Interpretation of Research Reports on Group Differences: The Tall-Tale Effect

    ERIC Educational Resources Information Center

    Hogan, Thomas P.; Zaboski, Brian A.; Perry, Tiffany R.

    2015-01-01

    How does the student untrained in advanced statistics interpret results of research that reports a group difference? In two studies, statistically untrained college students were presented with abstracts or professional associations' reports and asked for estimates of scores obtained by the original participants in the studies. These estimates…

  9. 48 CFR 9904.409-61 - Interpretation. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.409-61 Section 9904.409-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.409-61 Interpretation. [Reserved] ...

  10. 48 CFR 9904.407-61 - Interpretation. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.407-61 Section 9904.407-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.407-61 Interpretation. [Reserved] ...

  11. 48 CFR 9904.405-61 - Interpretation. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.405-61 Section 9904.405-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.405-61 Interpretation. [Reserved] ...

  12. 48 CFR 9904.402-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-61 Section 9904.402-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.402-61 Interpretation. (a) 9904.402, Cost Accounting...

  13. 48 CFR 9904.410-61 - Interpretation. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.410-61 Section 9904.410-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.410-61 Interpretation. [Reserved] ...

  14. 48 CFR 9904.404-61 - Interpretation. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.404-61 Section 9904.404-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.404-61 Interpretation. [Reserved] ...

  15. 48 CFR 9904.408-61 - Interpretation. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.408-61 Section 9904.408-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.408-61 Interpretation. [Reserved] ...

  16. Distributional properties of relative phase in bimanual coordination.

    PubMed

    James, Eric; Layne, Charles S; Newell, Karl M

    2010-10-01

    Studies of bimanual coordination have typically estimated the stability of coordination patterns through the use of the circular standard deviation of relative phase. The interpretation of this statistic depends upon the assumption of a von Mises distribution. The present study tested this assumption by examining the distributional properties of relative phase in three bimanual coordination patterns. There were significant deviations from the von Mises distribution due to differences in the kurtosis of distributions. The kurtosis depended upon the relative phase pattern performed, with leptokurtic distributions occurring in the in-phase and antiphase patterns and platykurtic distributions occurring in the 30° pattern. Thus, the distributional assumptions needed to validly and reliably use the standard deviation are not necessarily present in relative phase data though they are qualitatively consistent with the landscape properties of the intrinsic dynamics.
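
    A quick way to probe the issue raised here is to compute the circular standard deviation of a relative-phase series and inspect its kurtosis; the sketch below uses SciPy's circular statistics on simulated phase data, with arbitrary simulation parameters.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Simulated relative-phase series (radians) around an in-phase attractor at 0 rad.
    von_mises_like = stats.vonmises.rvs(kappa=8.0, size=2000, random_state=rng)
    heavy_tailed = np.clip(rng.standard_t(df=3, size=2000) * 0.2, -np.pi, np.pi)   # leptokurtic series

    for label, phase in (("von Mises-like", von_mises_like), ("heavy-tailed", heavy_tailed)):
        circ_sd = stats.circstd(phase, high=np.pi, low=-np.pi)   # circular standard deviation
        kurt = stats.kurtosis(phase)                             # excess kurtosis; 0 is the mesokurtic reference
        print(label, round(circ_sd, 3), round(kurt, 2))
    ```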

  17. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
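
    A minimal illustration of the fitting described above (linear, quadratic, and exponential models for a time series) is given below using NumPy; the data are synthetic and the exponential fit is done on the log scale.

    ```python
    import numpy as np

    t = np.arange(24, dtype=float)                             # e.g., 24 months of time-series data
    rng = np.random.default_rng(11)
    y = 5.0 * np.exp(0.08 * t) + rng.normal(0, 1.0, t.size)    # synthetic trend plus noise

    linear = np.polyfit(t, y, deg=1)                           # slope and intercept
    quadratic = np.polyfit(t, y, deg=2)                        # quadratic, linear, and constant coefficients
    log_b, log_a = np.polyfit(t, np.log(y), deg=1)             # exponential fit y ~ a * exp(b * t)

    print("linear slope:", linear[0])
    print("quadratic coefficients:", quadratic)
    print("exponential rate and scale:", log_b, np.exp(log_a))
    ```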

  18. Terminology inaccuracies in the interpretation of imaging results in detection of cervical lymph node metastases in papillary thyroid cancer

    PubMed Central

    Mulla, Mubashir; Schulte, Klaus-Martin

    2012-01-01

    Cervical lymph nodes (CLNs) are the most common site of metastases in papillary thyroid cancer (PTC). Ultrasound scan (US) is the most commonly used imaging modality in the evaluation of CLNs in PTC. Computerised tomography (CT) and 18fluorodeoxyglucose positron emission tomography (18FDG PET–CT) are used less commonly. It is widely believed that the above imaging techniques should guide the surgical approach to the patient with PTC. Methods We performed a systematic review of imaging studies from the literature assessing their usefulness for the detection of metastatic CLNs in PTC. We evaluated the authors' interpretations of their numeric findings specifically with regard to ‘sensitivity’ and ‘negative predictive value’ (NPV) by comparing their use against standard definitions of these terms in probabilistic statistics. Results A total of 16 studies used probabilistic terms to describe the value of US for the detection of LN metastases. Only 6 (37.5%) calculated sensitivity and NPV correctly. For CT, out of the eight studies, only 1 (12.5%) used correct terms to describe analytical results. One study looked at magnetic resonance imaging, while three assessed 18FDG PET–CT, none of which provided correct calculations for sensitivity and NPV. Conclusion Imaging provides high specificity for the detection of cervical metastases of PTC. However, sensitivity and NPV are low. The majority of studies reporting a high sensitivity have not used key terms according to standard definitions of probabilistic statistics. Against common opinion, there is no current evidence that failure to find LN metastases on ultrasound or cross-sectional imaging can be used to guide surgical decision making. PMID:23781308

  19. Analysis of biochemical genetic data on Jewish populations: II. Results and interpretations of heterogeneity indices and distance measures with respect to standards.

    PubMed Central

    Karlin, S; Kenett, R; Bonné-Tamir, B

    1979-01-01

    A nonparametric statistical methodology is used for the analysis of biochemical frequency data observed on a series of nine Jewish and six non-Jewish populations. Two categories of statistics are used: heterogeneity indices and various distance measures with respect to a standard. The latter are more discriminating in exploiting historical, geographical and culturally relevant information. A number of partial orderings and distance relationships among the populations are determined. Our concern in this study is to analyze similarities and differences among the Jewish populations, in terms of the gene frequency distributions for a number of genetic markers. Typical questions discussed are as follows: These Jewish populations differ in certain morphological and anthropometric traits. Are there corresponding differences in biochemical genetic constitution? How can we assess the extent of heterogeneity between and within groupings? Which class of markers (blood typings or protein loci) discriminates better among the separate populations? The results are quite surprising. For example, we found the Ashkenazi, Sephardi and Iraqi Jewish populations to be consistently close in genetic constitution and distant from all the other populations, namely the Yemenite and Cochin Jews, the Arabs, and the non-Jewish German and Russian populations. We found the Polish Jewish community the most heterogeneous among all Jewish populations. The blood loci discriminate better than the protein loci. A number of possible interpretations and hypotheses for these and other results are offered. The method devised for this analysis should prove useful in studying similarities and differences for other groups of populations for which substantial biochemical polymorphic data are available. PMID:380330

  20. 30 CFR 784.200 - Interpretive rules related to General Performance Standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RECLAMATION AND OPERATION PLAN § 784.200 Interpretive rules related to General Performance Standards. The... ENFORCEMENT, DEPARTMENT OF THE INTERIOR SURFACE COAL MINING AND RECLAMATION OPERATIONS PERMITS AND COAL... Surface Mining Reclamation and Enforcement. (a) Interpretation of § 784.15: Reclamation plan: Postmining...

  1. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    PubMed

    Nour-Eldein, Hebatallah

    2016-01-01

    Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. The aim was to determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the deficiencies were as follows: no prior sample size calculation 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting of confidence intervals with effect size measures 12/60 (20.0%), and use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%); errors related to interpretation were mainly conclusions without support by the study data 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles.

  2. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study

    PubMed Central

    Nour-Eldein, Hebatallah

    2016-01-01

    Background: Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the deficiencies were as follows: no prior sample size calculation 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting of confidence intervals with effect size measures 12/60 (20.0%), and use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%); errors related to interpretation were mainly conclusions without support by the study data 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles. PMID:27453839

  3. Statistical Association Criteria in Forensic Psychiatry–A criminological evaluation of casuistry

    PubMed Central

    Gheorghiu, V; Buda, O; Popescu, I; Trandafir, MS

    2011-01-01

    Purpose. The potential for shared primary psychoprophylaxis and crime prevention is assessed by analyzing the rate of commitments for patients who are subjects of forensic examination. Material and method. This is a retrospective, document-based study. The sample consists of 770 initial examination reports performed and completed during 2007 and primarily analyzed in order to summarize the data within the National Institute of Forensic Medicine, Bucharest, Romania (INML); one of the grouping variables is ‘particularities of the psychiatric patient history’, containing the items ‘forensic onset’, ‘commitments within the last year prior to the examination’ and ‘absence of commitments within the last year prior to the examination’. The method used was the Kendall bivariate correlation. For this study, the authors separately analyze only the two items regarding commitments, using additional correlation approaches and more elaborate statistical analyses: recording of the standard case study variables, Kendall bivariate correlation, cross tabulation, factor analysis and hierarchical cluster analysis. Results. The results are varied, ranging from theoretically presumed clinical nosography (such as schizophrenia or manic depression) to non-presumed (conduct disorders) or unexpected behavioral acts, and are therefore difficult to interpret. Conclusions. The features of the sample were taken into consideration, as well as the results of the previous standard correlation of the whole sample. The authors emphasize the role of the medical security measures actually applied in therapeutic management in general and in risk and second-offence management in particular, as well as the role of forensic psychiatric examinations in detecting aspects related to the monitoring of mental patients. PMID:21505571
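
    The Kendall bivariate correlation named in this record can be illustrated with a minimal sketch; the two dichotomously coded history items and their values below are invented for illustration and are not the study's data.

```python
# Minimal sketch of a Kendall bivariate correlation between two ordinal items,
# e.g. coded patient-history variables. Data values are invented.
from scipy.stats import kendalltau

forensic_onset    = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
recent_commitment = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0]

tau, p_value = kendalltau(forensic_onset, recent_commitment)
print(f"Kendall tau-b = {tau:.2f}, p = {p_value:.3f}")
```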

  4. Coordinate based random effect size meta-analysis of neuroimaging studies.

    PubMed

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both the coordinates and the reported t statistics or Z scores, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random-effects meta-analyses of reported effects performed cluster-wise using standard statistical methods and taking account of censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating and even amplifying the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
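
    A minimal sketch of the kind of cluster-wise random-effects pooling described above, using the common DerSimonian-Laird estimator as one standard choice (the specific estimator used by ClusterZ is not stated here); the effect sizes and variances are invented, and ClusterZ itself additionally handles censoring, which this sketch does not.

```python
# Sketch of a standard random-effects meta-analysis (DerSimonian-Laird) of
# per-study effect sizes and their variances; values are invented.
import numpy as np

effects = np.array([0.42, 0.55, 0.30, 0.61, 0.25])     # reported effect sizes
variances = np.array([0.02, 0.03, 0.015, 0.04, 0.02])  # within-study variances

w_fixed = 1.0 / variances
mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
Q = np.sum(w_fixed * (effects - mu_fixed) ** 2)          # Cochran's Q
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)                            # between-study variance

w_re = 1.0 / (variances + tau2)
mu_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2={tau2:.3f}, pooled effect={mu_re:.3f} (SE {se_re:.3f})")
```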

  5. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  6. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  7. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  8. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  9. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  10. 48 CFR 9904.403-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-61 Section 9904.403-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.403-61 Interpretation. (a) Questions have arisen as to...

  11. Standardizing Interpretive Training to Create a More Meaningful Visitor Experience

    ERIC Educational Resources Information Center

    Carr, Rob

    2016-01-01

    Implementing a standardized interpretive training and mentoring program across multiple departments has helped create a shared language that staff and volunteers use to collaborate on and evaluate interpretive programs and products. This has led to more efficient and effective training and measurable improvements in the quality of the visitor's…

  12. A Practical Guide to Check the Consistency of Item Response Patterns in Clinical Research Through Person-Fit Statistics: Examples and a Computer Program.

    PubMed

    Meijer, Rob R; Niessen, A Susan M; Tendeiro, Jorge N

    2016-02-01

    Although there are many studies devoted to person-fit statistics to detect inconsistent item score patterns, most are difficult for nonspecialists to understand. The aim of this tutorial is to explain the principles of these statistics for researchers and clinicians who are interested in applying them. In particular, we first explain how invalid test scores can be detected using person-fit statistics; second, we provide the reader with practical examples of existing studies that used person-fit statistics to detect and interpret inconsistent item score patterns; and third, we discuss a new R package that can be used to identify and interpret inconsistent score patterns. © The Author(s) 2015.
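
    One of the simplest person-fit ideas can be sketched directly: counting Guttman errors in a dichotomous item-score pattern. The patterns below are invented, and dedicated R packages (such as PerFit) implement this and many more refined statistics.

```python
# Sketch of one simple person-fit statistic: the count of Guttman errors in a
# dichotomous item-score pattern (items ordered from easiest to hardest).
def guttman_errors(pattern):
    """pattern: 0/1 scores with items already sorted from easiest to hardest.
    A Guttman error is a pair where an easier item is failed (0) but a
    harder item is passed (1)."""
    errors = 0
    for i in range(len(pattern)):
        for j in range(i + 1, len(pattern)):
            if pattern[i] == 0 and pattern[j] == 1:
                errors += 1
    return errors

consistent   = [1, 1, 1, 1, 0, 0, 0, 0]   # Guttman-consistent pattern
inconsistent = [0, 0, 0, 0, 1, 1, 1, 1]   # same total score, maximally aberrant
print(guttman_errors(consistent), guttman_errors(inconsistent))  # 0 vs 16
```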

  13. Uniform quantized electron gas

    NASA Astrophysics Data System (ADS)

    Høye, Johan S.; Lomba, Enrique

    2016-10-01

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well-known RPA (random phase approximation). Then to improve it we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot occupy the same position. Numerical evaluations are compared with well-known results of a standard parameterization of Monte Carlo correlation energies.

  14. Marketing of personalized cancer care on the web: an analysis of Internet websites.

    PubMed

    Gray, Stacy W; Cronin, Angel; Bair, Elizabeth; Lindeman, Neal; Viswanath, Vish; Janeway, Katherine A

    2015-05-01

    Internet marketing may accelerate the use of care based on genomic or tumor-derived data. However, online marketing may be detrimental if it endorses products of unproven benefit. We conducted an analysis of Internet websites to identify personalized cancer medicine (PCM) products and claims. A Delphi Panel categorized PCM as standard or nonstandard based on evidence of clinical utility. Fifty-five websites, sponsored by commercial entities, academic institutions, physicians, research institutes, and organizations, that marketed PCM included somatic (58%) and germline (20%) analysis, interpretive services (15%), and physicians/institutions offering personalized care (44%). Of 32 sites offering somatic analysis, 56% included specific test information (range 1-152 tests). All statistical tests were two-sided, and comparisons of website content were conducted using McNemar's test. More websites contained information about the benefits than limitations of PCM (85% vs 27%, P < .001). Websites specifying somatic analysis were statistically significantly more likely to market one or more nonstandard tests as compared with standard tests (88% vs 44%, P = .04). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
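
    McNemar's test, named in this record, compares paired binary ratings through the discordant pairs only. A hedged sketch with invented counts (not the study's data) is shown below, using the continuity-corrected chi-square form.

```python
# Hedged sketch of McNemar's test for paired binary website ratings (e.g. whether
# each site mentions benefits vs. limitations). Counts are invented.
from scipy.stats import chi2

# Discordant pairs: b = benefits mentioned only, c = limitations mentioned only
b, c = 34, 2
stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected McNemar statistic
p_value = chi2.sf(stat, df=1)
print(f"McNemar chi-square = {stat:.2f}, p = {p_value:.4f}")
```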

  15. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
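
    A minimal propagation-of-error sketch in the spirit of this record: predicting a concentration from a fitted linear standard curve y = a + b*x and propagating the parameter and measurement uncertainties via the delta method. All numbers are illustrative, covariances are ignored for brevity, and a real ELISA analysis would typically use a four-parameter logistic curve with the full covariance matrix.

```python
# Delta-method propagation of error for x0 = (y0 - a) / b; values are invented.
import numpy as np

a, b = 0.05, 0.012          # fitted intercept and slope of the standard curve
var_a, var_b = 1e-4, 4e-7   # parameter variances from the fit
y0, var_y0 = 0.65, 2.5e-4   # observed response and its measurement variance

x0 = (y0 - a) / b           # predicted concentration

# var(x0) ~ (dx/dy0)^2 var_y0 + (dx/da)^2 var_a + (dx/db)^2 var_b
d_dy0 = 1.0 / b
d_da = -1.0 / b
d_db = -(y0 - a) / b**2
var_x0 = d_dy0**2 * var_y0 + d_da**2 * var_a + d_db**2 * var_b
print(f"x0 = {x0:.1f} +/- {np.sqrt(var_x0):.1f}")
```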

  16. 48 CFR 9904.401-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-61 Section 9904.401-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.401-61 Interpretation. (a) 9904.401, Cost Accounting... accounting practices used in accumulating and reporting costs.” (b) In estimating the cost of direct material...

  17. 76 FR 62 - Interpretive Standards for Systemic Compensation Discrimination and Voluntary Guidelines for Self...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-03

    ... 1250-ZA00 Interpretive Standards for Systemic Compensation Discrimination and Voluntary Guidelines for... Order 11246 with respect to Systemic Compensation Discrimination (Standards) and Voluntary Guidelines... to Systemic Compensation Discrimination (Voluntary Guidelines). OFCCP is proposing to rescind the...

  18. Cancer survival: an overview of measures, uses, and interpretation.

    PubMed

    Mariotto, Angela B; Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M

    2014-11-01

    Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics organized by their specific purpose: research and policy versus prognosis and clinical decision making. Published by Oxford University Press 2014.

  19. Cancer Survival: An Overview of Measures, Uses, and Interpretation

    PubMed Central

    Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E.; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M.

    2014-01-01

    Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics organized by their specific purpose: research and policy versus prognosis and clinical decision making. PMID:25417231

  20. Material Phase Causality or a Dynamics-Statistical Interpretation of Quantum Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koprinkov, I. G.

    2010-11-25

    The internal phase dynamics of a quantum system interacting with an electromagnetic field is revealed in detail. Theoretical and experimental evidence of a causal relation of the phase of the wave function to the dynamics of the quantum system is presented systematically for the first time. A dynamics-statistical interpretation of quantum mechanics is introduced.

  1. Uses and Misuses of Student Evaluations of Teaching: The Interpretation of Differences in Teaching Evaluation Means Irrespective of Statistical Information

    ERIC Educational Resources Information Center

    Boysen, Guy A.

    2015-01-01

    Student evaluations of teaching are among the most accepted and important indicators of college teachers' performance. However, faculty and administrators can overinterpret small variations in mean teaching evaluations. The current research examined the effect of including statistical information on the interpretation of teaching evaluations.…

  2. Interpretations of Boxplots: Helping Middle School Students to Think outside the Box

    ERIC Educational Resources Information Center

    Edwards, Thomas G.; Özgün-Koca, Asli; Barr, John

    2017-01-01

    Boxplots are statistical representations for organizing and displaying data that are relatively easy to create with a five-number summary. However, boxplots are not as easy to understand, interpret, or connect with other statistical representations of the same data. We worked at two different schools with 259 middle school students who constructed…
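
    The five-number summary behind a boxplot can be computed in a few lines; the data values below are invented.

```python
# Five-number summary of the kind used to construct a boxplot; data are invented.
import numpy as np

data = np.array([3, 7, 8, 5, 12, 14, 21, 13, 18, 2, 9, 10])
five_num = {
    "min": np.min(data),
    "Q1": np.percentile(data, 25),
    "median": np.median(data),
    "Q3": np.percentile(data, 75),
    "max": np.max(data),
}
iqr = five_num["Q3"] - five_num["Q1"]   # box width; whiskers extend toward min/max
print(five_num, "IQR =", iqr)
```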

  3. Similar range of motion and function after resurfacing large–head or standard total hip arthroplasty

    PubMed Central

    2013-01-01

    Background and purpose: Large-size hip articulations may improve range of motion (ROM) and function compared to a 28-mm THA, and the low risk of dislocation allows the patients more activity postoperatively. On the other hand, the greater extent of surgery for resurfacing hip arthroplasty (RHA) could impair rehabilitation. We investigated the effect of head size and surgical procedure on postoperative rehabilitation in a randomized clinical trial (RCT). Methods: We followed randomized groups of RHAs, large-head THAs and standard THAs at 2 months, 6 months, 1 and 2 years postoperatively, recording clinical rehabilitation parameters. Results: Large articulations increased the mean total range of motion by 13° during the first 6 postoperative months. The increase was not statistically significant and was transient. The 2-year total ROM (SD) for RHA, standard THA, and large-head THA was 221° (35), 232° (36), and 225° (30) respectively, but the differences were not statistically significant. The 3 groups were similar regarding Harris hip score, UCLA activity score, step rate, and sick leave. Interpretation: Head size had no influence on range of motion. The lack of restriction allowed for large articulations did not improve the clinical and patient-perceived outcomes. The more extensive surgical procedure of RHA did not impair the rehabilitation. This project is registered at ClinicalTrials.gov under # NCT01113762. PMID:23530872

  4. High Impact = High Statistical Standards? Not Necessarily So

    PubMed Central

    Tressoldi, Patrizio E.; Giofré, David; Sella, Francesco; Cumming, Geoff

    2013-01-01

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors. PMID:23418533

  5. High impact  =  high statistical standards? Not necessarily so.

    PubMed

    Tressoldi, Patrizio E; Giofré, David; Sella, Francesco; Cumming, Geoff

    2013-01-01

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors.

  6. Phylogeography Takes a Relaxed Random Walk in Continuous Space and Time

    PubMed Central

    Lemey, Philippe; Rambaut, Andrew; Welch, John J.; Suchard, Marc A.

    2010-01-01

    Research aimed at understanding the geographic context of evolutionary histories is burgeoning across biological disciplines. Recent endeavors attempt to interpret contemporaneous genetic variation in the light of increasingly detailed geographical and environmental observations. Such interest has promoted the development of phylogeographic inference techniques that explicitly aim to integrate such heterogeneous data. One promising development involves reconstructing phylogeographic history on a continuous landscape. Here, we present a Bayesian statistical approach to infer continuous phylogeographic diffusion using random walk models while simultaneously reconstructing the evolutionary history in time from molecular sequence data. Moreover, by accommodating branch-specific variation in dispersal rates, we relax the most restrictive assumption of the standard Brownian diffusion process and demonstrate increased statistical efficiency in spatial reconstructions of overdispersed random walks by analyzing both simulated and real viral genetic data. We further illustrate how drawing inference about summary statistics from a fully specified stochastic process over both sequence evolution and spatial movement reveals important characteristics of a rabies epidemic. Together with recent advances in discrete phylogeographic inference, the continuous model developments furnish a flexible statistical framework for biogeographical reconstructions that is easily expanded upon to accommodate various landscape genetic features. PMID:20203288

  7. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information when the multiple data sets are used in concert, as compared to the statistical errors when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
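
    A toy illustration of the central benefit described above: for independent Gaussian measurements of a common spectral parameter, the joint maximum-likelihood estimate is the inverse-variance weighted mean, and its standard deviation is smaller than that of either data set alone. The detector values below are invented.

```python
# Combining independent estimates of a shared parameter under Gaussian errors;
# numbers are invented for illustration.
import numpy as np

est = np.array([2.70, 2.95])      # spectral index estimated by each detector
sigma = np.array([0.15, 0.25])    # per-detector standard deviations

w = 1.0 / sigma**2
combined = np.sum(w * est) / np.sum(w)
combined_sigma = np.sqrt(1.0 / np.sum(w))
print(f"combined estimate = {combined:.3f} +/- {combined_sigma:.3f}")
# combined_sigma (~0.129) is below min(sigma), illustrating the reduced error.
```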

  8. The Impact of an Interactive Statistics Module on Novices' Development of Scientific Process Skills and Attitudes in a First-Semester Research Foundations Course.

    PubMed

    Marsan, Lynnsay A; D'Arcy, Christina E; Olimpo, Jeffrey T

    2016-12-01

    Evidence suggests that incorporating quantitative reasoning exercises into existent curricular frameworks within the science, technology, engineering, and mathematics (STEM) disciplines is essential for novices' development of conceptual understanding and process skills in these domains. Despite this being the case, such studies acknowledge that students often experience difficulty in applying mathematics in the context of scientific problems. To address this concern, the present study sought to explore the impact of active demonstrations and critical reading exercises on novices' comprehension of basic statistical concepts, including hypothesis testing, experimental design, and interpretation of research findings. Students first engaged in a highly interactive height activity that served to intuitively illustrate normal distribution, mean, standard deviation, and sample selection criteria. To enforce practical applications of standard deviation and p -value, student teams were subsequently assigned a figure from a peer-reviewed primary research article and instructed to evaluate the trustworthiness of the data. At the conclusion of this exercise, students presented their evaluations to the class for open discussion and commentary. Quantitative assessment of pre- and post-module survey data indicated a statistically significant increase both in students' scientific reasoning and process skills and in their self-reported confidence in understanding the statistical concepts presented in the module. Furthermore, data indicated that the majority of students (>85%) found the module both interesting and helpful in nature. Future studies will seek to develop additional, novel exercises within this area and to evaluate the impact of such modules across a variety of STEM and non-STEM contexts.
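
    A sketch in the spirit of the height activity described above: simulate heights from a normal distribution and check the empirical 68-95-99.7 rule. The mean and standard deviation are illustrative and are not taken from the study.

```python
# Simulated "class heights": mean, SD, and coverage within 1 and 2 SDs.
import numpy as np

rng = np.random.default_rng(0)
heights = rng.normal(loc=170, scale=8, size=10_000)   # cm; illustrative values

mean, sd = heights.mean(), heights.std(ddof=1)
within_1sd = np.mean(np.abs(heights - mean) <= 1 * sd)
within_2sd = np.mean(np.abs(heights - mean) <= 2 * sd)
print(f"mean={mean:.1f}, sd={sd:.1f}, "
      f"within 1 SD: {within_1sd:.1%}, within 2 SD: {within_2sd:.1%}")
```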

  9. The Impact of an Interactive Statistics Module on Novices’ Development of Scientific Process Skills and Attitudes in a First-Semester Research Foundations Course†

    PubMed Central

    Marsan, Lynnsay A.; D’Arcy, Christina E.; Olimpo, Jeffrey T.

    2016-01-01

    Evidence suggests that incorporating quantitative reasoning exercises into existent curricular frameworks within the science, technology, engineering, and mathematics (STEM) disciplines is essential for novices’ development of conceptual understanding and process skills in these domains. Despite this being the case, such studies acknowledge that students often experience difficulty in applying mathematics in the context of scientific problems. To address this concern, the present study sought to explore the impact of active demonstrations and critical reading exercises on novices’ comprehension of basic statistical concepts, including hypothesis testing, experimental design, and interpretation of research findings. Students first engaged in a highly interactive height activity that served to intuitively illustrate normal distribution, mean, standard deviation, and sample selection criteria. To enforce practical applications of standard deviation and p-value, student teams were subsequently assigned a figure from a peer-reviewed primary research article and instructed to evaluate the trustworthiness of the data. At the conclusion of this exercise, students presented their evaluations to the class for open discussion and commentary. Quantitative assessment of pre- and post-module survey data indicated a statistically significant increase both in students’ scientific reasoning and process skills and in their self-reported confidence in understanding the statistical concepts presented in the module. Furthermore, data indicated that the majority of students (>85%) found the module both interesting and helpful in nature. Future studies will seek to develop additional, novel exercises within this area and to evaluate the impact of such modules across a variety of STEM and non-STEM contexts. PMID:28101271

  10. Minimum Information about a Genotyping Experiment (MIGEN)

    PubMed Central

    Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.

    2011-01-01

    Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers for thousands of individuals, requiring various statistical, modeling or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata. PMID:22180825

  11. Weighing Evidence "Steampunk" Style via the Meta-Analyser.

    PubMed

    Bowden, Jack; Jackson, Chris

    2016-10-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
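
    The center-of-mass analogy can be made concrete in a few lines: a fixed-effect pooled estimate is the weighted mean of the study estimates, with inverse-variance weights playing the role of masses. The study values below are invented.

```python
# Fixed-effect pooling as a center of mass; per-study values are invented.
import numpy as np

estimates = np.array([0.8, 1.1, 0.5, 0.9])     # per-study effect estimates
std_errors = np.array([0.30, 0.20, 0.40, 0.25])

weights = 1.0 / std_errors**2                  # the "masses"
pooled = np.sum(weights * estimates) / np.sum(weights)   # center of mass
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled estimate = {pooled:.2f} (SE {pooled_se:.2f})")
```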

  12. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2012-10-01 2011-10-01 true Standard Flood Insurance...

  13. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2013-10-01 2013-10-01 false Standard Flood Insurance...

  14. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2014-10-01 2014-10-01 false Standard Flood Insurance...

  15. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2011-10-01 2011-10-01 false Standard Flood Insurance...

  16. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Standard Flood Insurance...

  17. 20 CFR 634.4 - Statistical standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Statistical standards. 634.4 Section 634.4... System § 634.4 Statistical standards. Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs. ...

  18. 20 CFR 634.4 - Statistical standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Statistical standards. 634.4 Section 634.4... System § 634.4 Statistical standards. Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs. ...

  19. Impact of Immediate Interpretation of Screening Tomosynthesis Mammography on Performance Metrics.

    PubMed

    Winkler, Nicole S; Freer, Phoebe; Anzai, Yoshimi; Hu, Nan; Stein, Matthew

    2018-05-07

    This study aimed to compare performance metrics for immediate and delayed batch interpretation of screening tomosynthesis mammograms. This HIPAA compliant study was approved by the institutional review board with a waiver of consent. A retrospective analysis of screening performance metrics for tomosynthesis mammograms interpreted in 2015, when mammograms were read immediately, was compared to historical controls from 2013 to 2014, when mammograms were batch interpreted after the patient had departed. A total of 5518 screening tomosynthesis mammograms (n = 1212 for batch interpretation and n = 4306 for immediate interpretation) were evaluated. The larger sample size for the latter group reflects a group practice shift to performing tomosynthesis for the majority of patients. Age, breast density, comparison examinations, and high-risk status were compared. An asymptotic proportion test and multivariable analysis were used to compare performance metrics. There was no statistically significant difference in recall or cancer detection rates for the batch interpretation group compared to the immediate interpretation group, with respective recall rates of 6.5% vs 5.3% = +1.2% (95% confidence interval -0.3 to 2.7%; P = .101) and cancer detection rates of 6.6 vs 7.2 per thousand = -0.6 (95% confidence interval -5.9 to 4.6; P = .825). There was no statistically significant difference in positive predictive values (PPVs), including PPV1 (screening recall), PPV2 (biopsy recommendation), or PPV3 (biopsy performed), with batch interpretation (10.1%, 42.1%, and 40.0%, respectively) and immediate interpretation (13.6%, 39.2%, and 39.7%, respectively). After adjusting for age, breast density, high-risk status, and comparison mammogram, there was no difference in the odds of being recalled or of cancer detection between the two groups. There is no statistically significant difference in interpretation performance metrics for screening tomosynthesis mammograms interpreted immediately compared to those interpreted in a delayed fashion. Copyright © 2018. Published by Elsevier Inc.
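
    A hedged sketch of an asymptotic two-proportion comparison of recall rates, using the group sizes and recall percentages quoted above only approximately; the published analysis also adjusted for covariates, which this sketch does not.

```python
# Two-proportion z-test on approximate recall counts reconstructed from the
# quoted percentages; this is illustrative, not a re-analysis of the study.
import numpy as np
from scipy.stats import norm

n1, n2 = 1212, 4306                              # batch vs immediate interpretation
x1, x2 = round(0.065 * n1), round(0.053 * n2)    # approximate recall counts

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))
print(f"difference = {p1 - p2:+.3f}, z = {z:.2f}, p = {p_value:.3f}")
```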

  20. Court Interpreters and Translators: Developing Ethical and Professional Standards.

    ERIC Educational Resources Information Center

    Funston, Richard

    Changing needs in the courtroom have raised questions about the need for standards in court interpreter qualifications. In California, no formal training or familiarity with the legal system is required for certification, which is done entirely by language testing. The fact that often court interpreters are officers of the court may be…

  1. ΛCDM model with dissipative nonextensive viscous dark matter

    NASA Astrophysics Data System (ADS)

    Gimenes, H. S.; Viswanathan, G. M.; Silva, R.

    2018-03-01

    Many models in cosmology typically assume the standard bulk viscosity. We study an alternative interpretation for the origin of the bulk viscosity. Using nonadditive statistics proposed by Tsallis, we propose a bulk viscosity component that can only exist by a nonextensive effect through the nonextensive/dissipative correspondence (NexDC). In this paper, we consider a ΛCDM model for a flat universe with a dissipative nonextensive viscous dark matter component, following the Eckart theory of bulk viscosity, without any perturbative approach. In order to analyze cosmological constraints, we use one of the most recent observations of Type Ia Supernova, baryon acoustic oscillations and cosmic microwave background data.

  2. An experimental verification of laser-velocimeter sampling bias and its correction

    NASA Technical Reports Server (NTRS)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.

  3. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recently published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
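
    The interaction of sample size and significance discussed above can be shown with a small sketch: the same modest correlation (r = 0.10, about 1% of variance explained) is "significant" in a large sample but not in a small one. The sample sizes are hypothetical.

```python
# Significance of a fixed small correlation as a function of sample size,
# using the standard t-test for Pearson's r; sample sizes are hypothetical.
from scipy.stats import t as t_dist

def p_value_for_r(r, n):
    """Two-sided p-value for H0: rho = 0 using t = r*sqrt((n-2)/(1-r^2))."""
    t_stat = r * ((n - 2) / (1 - r**2)) ** 0.5
    return 2 * t_dist.sf(abs(t_stat), df=n - 2)

for n in (30, 500, 5000):
    print(f"r=0.10, n={n}: p = {p_value_for_r(0.10, n):.4f}")
# The effect size (r^2 = 0.01) is identical in every case; only n changes.
```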

  4. Exclusion probabilities and likelihood ratios with applications to kinship problems.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2014-05-01

    In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
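
    A hypothetical single-locus paternity sketch of the two summary statistics compared in this record. It assumes Hardy-Weinberg proportions, a single obligate paternal allele 'b' with an invented population frequency, and no mutations or silent alleles; real casework combines many loci.

```python
# Hypothetical single-locus paternity example (see assumptions in the text above).
p_b = 0.08   # invented population frequency of the obligate paternal allele b

# Paternity likelihood ratio: a heterozygous alleged father passes b with
# probability 1/2; a random man's paternal contribution is b with probability p_b.
lr_heterozygous = 0.5 / p_b
lr_homozygous = 1.0 / p_b     # a homozygous bb alleged father passes b with certainty

# RMNE: probability that a random man carries at least one copy of b,
# i.e. that he is not excluded.
rmne = 1 - (1 - p_b) ** 2

print(f"LR (het) = {lr_heterozygous:.2f}, LR (hom) = {lr_homozygous:.2f}, "
      f"RMNE = {rmne:.3f}, 1/RMNE = {1 / rmne:.2f}")
# Whether LR exceeds 1/RMNE depends on the case, in line with the expectation
# result discussed in the abstract.
```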

  5. Approaches for estimating minimal clinically important differences in systemic lupus erythematosus.

    PubMed

    Rai, Sharan K; Yazdany, Jinoos; Fortin, Paul R; Aviña-Zubieta, J Antonio

    2015-06-03

    A minimal clinically important difference (MCID) is an important concept used to determine whether a medical intervention improves perceived outcomes in patients. Prior to the introduction of the concept in 1989, studies focused primarily on statistical significance. As most recent clinical trials in systemic lupus erythematosus (SLE) have failed to show significant effects, determining a clinically relevant threshold for outcome scores (that is, the MCID) of existing instruments may be critical for conducting and interpreting meaningful clinical trials as well as for facilitating the establishment of treatment recommendations for patients. To that effect, methods to determine the MCID can be divided into two well-defined categories: distribution-based and anchor-based approaches. Distribution-based approaches are based on statistical characteristics of the obtained samples. There are various methods within the distribution-based approach, including the standard error of measurement, the standard deviation, the effect size, the minimal detectable change, the reliable change index, and the standardized response mean. Anchor-based approaches compare the change in a patient-reported outcome to a second, external measure of change (that is, one that is more clearly understood, such as a global assessment), which serves as the anchor. Finally, the Delphi technique can be applied as an adjunct to defining a clinically important difference. Despite an abundance of methods reported in the literature, little work in MCID estimation has been done in the context of SLE. As the MCID can help determine the effect of a given therapy on a patient and add meaning to statistical inferences made in clinical research, we believe there ought to be renewed focus on this area. Here, we provide an update on the use of MCIDs in clinical research, review some of the work done in this area in SLE, and propose an agenda for future research.
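
    Three common distribution-based quantities used as MCID benchmarks can be sketched directly; the baseline standard deviation and reliability below are invented, and anchor-based approaches would additionally require an external measure of change.

```python
# Distribution-based MCID-type benchmarks for a patient-reported outcome score;
# the inputs are invented for illustration.
import math

sd_baseline = 12.0       # standard deviation of baseline scores
reliability = 0.85       # test-retest (or internal consistency) reliability

half_sd = 0.5 * sd_baseline                          # 0.5 SD benchmark
sem = sd_baseline * math.sqrt(1 - reliability)       # standard error of measurement
mdc95 = 1.96 * math.sqrt(2) * sem                    # minimal detectable change (95%)

print(f"0.5*SD = {half_sd:.1f}, SEM = {sem:.1f}, MDC95 = {mdc95:.1f}")
```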

  6. Descriptive data analysis.

    PubMed

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is the first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.

  7. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  8. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  9. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  10. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  11. Interpreting Association from Graphical Displays

    ERIC Educational Resources Information Center

    Fitzallen, Noleine

    2016-01-01

    Research that has explored students' interpretations of graphical representations has not extended to include how students apply understanding of particular statistical concepts related to one graphical representation to interpret different representations. This paper reports on the way in which students' understanding of covariation, evidenced…

  12. How Engineering Standards Are Interpreted and Translated for Middle School

    ERIC Educational Resources Information Center

    Judson, Eugene; Ernzen, John; Krause, Stephen; Middleton, James A.; Culbertson, Robert J.

    2016-01-01

    In this exploratory study we examined the alignment of Next Generation Science Standards (NGSS) middle school engineering design standards with lesson ideas from middle school teachers, science education faculty, and engineering faculty (4-6 members per group). Respondents were prompted to provide plain language interpretations of two middle…

  13. Search for a dark photon in e(+)e(-) collisions at BABAR.

    PubMed

    Lees, J P; Poireau, V; Tisserand, V; Grauges, E; Palano, A; Eigen, G; Stugu, B; Brown, D N; Feng, M; Kerth, L T; Kolomensky, Yu G; Lee, M J; Lynch, G; Koch, H; Schroeder, T; Hearty, C; Mattison, T S; McKenna, J A; So, R Y; Khan, A; Blinov, V E; Buzykaev, A R; Druzhinin, V P; Golubev, V B; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Todyshev, K Yu; Lankford, A J; Mandelkern, M; Dey, B; Gary, J W; Long, O; Campagnari, C; Franco Sevilla, M; Hong, T M; Kovalskyi, D; Richman, J D; West, C A; Eisner, A M; Lockman, W S; Panduro Vazquez, W; Schumm, B A; Seiden, A; Chao, D S; Cheng, C H; Echenard, B; Flood, K T; Hitlin, D G; Miyashita, T S; Ongmongkolkul, P; Porter, F C; Andreassen, R; Huard, Z; Meadows, B T; Pushpawela, B G; Sokoloff, M D; Sun, L; Bloom, P C; Ford, W T; Gaz, A; Smith, J G; Wagner, S R; Ayad, R; Toki, W H; Spaan, B; Bernard, D; Verderi, M; Playfer, S; Bettoni, D; Bozzi, C; Calabrese, R; Cibinetto, G; Fioravanti, E; Garzia, I; Luppi, E; Piemontese, L; Santoro, V; Calcaterra, A; de Sangro, R; Finocchiaro, G; Martellotti, S; Patteri, P; Peruzzi, I M; Piccolo, M; Rama, M; Zallo, A; Contri, R; Lo Vetere, M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Bhuyan, B; Prasad, V; Adametz, A; Uwer, U; Lacker, H M; Dauncey, P D; Mallik, U; Chen, C; Cochran, J; Prell, S; Ahmed, H; Gritsan, A V; Arnaud, N; Davier, M; Derkach, D; Grosdidier, G; Le Diberder, F; Lutz, A M; Malaescu, B; Roudeau, P; Stocchi, A; Wormser, G; Lange, D J; Wright, D M; Coleman, J P; Fry, J R; Gabathuler, E; Hutchcroft, D E; Payne, D J; Touramanis, C; Bevan, A J; Di Lodovico, F; Sacco, R; Cowan, G; Bougher, J; Brown, D N; Davis, C L; Denig, A G; Fritsch, M; Gradl, W; Griessinger, K; Hafner, A; Schubert, K R; Barlow, R J; Lafferty, G D; Cenci, R; Hamilton, B; Jawahery, A; Roberts, D A; Cowan, R; Sciolla, G; Cheaib, R; Patel, P M; Robertson, S H; Neri, N; Palombo, F; Cremaldi, L; Godang, R; Sonnek, P; Summers, D J; Simard, M; Taras, P; De Nardo, G; Onorato, G; Sciacca, C; Martinelli, M; Raven, G; Jessop, C P; LoSecco, J M; Honscheid, K; Kass, R; Feltresi, E; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simi, G; Simonetto, F; Stroili, R; Akar, S; Ben-Haim, E; Bomben, M; Bonneaud, G R; Briand, H; Calderini, G; Chauveau, J; Leruste, Ph; Marchiori, G; Ocariz, J; Biasini, M; Manoni, E; Pacetti, S; Rossi, A; Angelini, C; Batignani, G; Bettarini, S; Carpinelli, M; Casarosa, G; Cervelli, A; Chrzaszcz, M; Forti, F; Giorgi, M A; Lusiani, A; Oberhof, B; Paoloni, E; Perez, A; Rizzo, G; Walsh, J J; Lopes Pegna, D; Olsen, J; Smith, A J S; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Li Gioi, L; Pilloni, A; Piredda, G; Bünger, C; Dittrich, S; Grünberg, O; Hartmann, T; Hess, M; Leddig, T; Voß, C; Waldi, R; Adye, T; Olaiya, E O; Wilson, F F; Emery, S; Vasseur, G; Anulli, F; Aston, D; Bard, D J; Cartaro, C; Convery, M R; Dorfan, J; Dubois-Felsmann, G P; Dunwoodie, W; Ebert, M; Field, R C; Fulsom, B G; Graham, M T; Hast, C; Innes, W R; Kim, P; Leith, D W G S; Lewis, P; Lindemann, D; Luitz, S; Luth, V; Lynch, H L; MacFarlane, D B; Muller, D R; Neal, H; Perl, M; Pulliam, T; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Snyder, A; Su, D; Sullivan, M K; Va'vra, J; Wisniewski, W J; Wulsin, H W; Purohit, M V; White, R M; Wilson, J R; Randle-Conde, A; Sekula, S J; Bellis, M; Burchat, P R; Puccio, E M T; Alam, M S; Ernst, J A; Gorodeisky, R; Guttman, N; Peimer, D R; Soffer, A; Spanier, S M; Ritchie, J L; Ruland, A M; Schwitters, R F; Wray, B C; Izen, J M; Lou, X C; Bianchi, F; De Mori, 
F; Filippi, A; Gamba, D; Lanceri, L; Vitale, L; Martinez-Vidal, F; Oyanguren, A; Villanueva-Perez, P; Albert, J; Banerjee, Sw; Beaulieu, A; Bernlochner, F U; Choi, H H F; King, G J; Kowalewski, R; Lewczuk, M J; Lueck, T; Nugent, I M; Roney, J M; Sobie, R J; Tasneem, N; Gershon, T J; Harrison, P F; Latham, T E; Band, H R; Dasu, S; Pan, Y; Prepost, R; Wu, S L

    2014-11-14

    Dark sectors charged under a new Abelian interaction have recently received much attention in the context of dark matter models. These models introduce a light new mediator, the so-called dark photon (A^{'}), connecting the dark sector to the standard model. We present a search for a dark photon in the reaction e^{+}e^{-}→γA^{'}, A^{'}→e^{+}e^{-}, μ^{+}μ^{-} using 514  fb^{-1} of data collected with the BABAR detector. We observe no statistically significant deviations from the standard model predictions, and we set 90% confidence level upper limits on the mixing strength between the photon and dark photon at the level of 10^{-4}-10^{-3} for dark photon masses in the range 0.02-10.2  GeV. We further constrain the range of the parameter space favored by interpretations of the discrepancy between the calculated and measured anomalous magnetic moment of the muon.

  14. Ferrets as Models for Influenza Virus Transmission Studies and Pandemic Risk Assessments

    PubMed Central

    Barclay, Wendy; Barr, Ian; Fouchier, Ron A.M.; Matsuyama, Ryota; Nishiura, Hiroshi; Peiris, Malik; Russell, Charles J.; Subbarao, Kanta; Zhu, Huachen

    2018-01-01

    The ferret transmission model is extensively used to assess the pandemic potential of emerging influenza viruses, yet experimental conditions and reported results vary among laboratories. Such variation can be a critical consideration when contextualizing results from independent risk-assessment studies of novel and emerging influenza viruses. To streamline interpretation of data generated in different laboratories, we provide a consensus on experimental parameters that define risk-assessment experiments of influenza virus transmissibility, including disclosure of variables known or suspected to contribute to experimental variability in this model, and advocate adoption of more standardized practices. We also discuss current limitations of the ferret transmission model and highlight continued refinements and advances to this model ongoing in laboratories. Understanding, disclosing, and standardizing the critical parameters of ferret transmission studies will improve the comparability and reproducibility of pandemic influenza risk assessment and increase the statistical power and, perhaps, accuracy of this model. PMID:29774862

  15. MO-F-CAMPUS-I-01: Accuracy of Radiologists Interpretation of Mammographic Breast Density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedantham, S; Shi, L; Karellas, A

    2015-06-15

    Purpose: Several commercial and non-commercial software and techniques are available for determining breast density from mammograms. However, where mandated by law the breast density information communicated to the subject/patient is based on radiologist’s interpretation of breast density from mammograms. Several studies have reported on the concordance among radiologists in interpreting mammographic breast density. In this work, we investigated the accuracy of radiologist’s interpretation of breast density. Methods: Volumetric breast density (VBD) determined from 134 unilateral dedicated breast CT scans from 134 subjects was considered the truth. An MQSA-qualified study radiologist with more than 20 years of breast imaging experience reviewed the DICOM “for presentation” standard 2-view mammograms of the corresponding breasts and assigned BIRADS breast density categories. For statistical analysis, the breast density categories were dichotomized in two ways: fatty vs. dense breasts, where “fatty” corresponds to BIRADS breast density categories A/B and “dense” corresponds to BIRADS breast density categories C/D, and extremely dense vs. fatty to heterogeneously dense breasts, where extremely dense corresponds to BIRADS breast density category D and BIRADS breast density categories A through C were grouped as fatty to heterogeneously dense breasts. Logistic regression models (SAS 9.3) were used to determine the association between radiologist’s interpretation of breast density and VBD from breast CT, from which the area under the ROC curve (AUC) was determined. Results: Both logistic regression models were statistically significant (Likelihood Ratio test, p<0.0001). The accuracy (AUC) of the study radiologist for classification of fatty vs. dense breasts was 88.4% (95% CI: 83–94%) and for classification of extremely dense breasts was 94.3% (95% CI: 90–98%). Conclusion: The accuracy of the radiologist in classifying dense and extremely dense breasts is high. Considering the variability in VBD estimates from commercial software, the breast density information communicated to the patient should be based on radiologist’s interpretation. This work was supported in part by NIH R21 CA176470 and R21 CA134128. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or NCI.
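
    To make the analysis above concrete, the sketch below fits a logistic regression of a dichotomized radiologist call (dense vs. fatty) on volumetric breast density and reports the area under the ROC curve. It is an illustrative Python sketch on simulated values, not the SAS 9.3 analysis from the abstract; the variable names and the assumed dose-response relationship are hypothetical.

```python
# Illustrative sketch (not the SAS 9.3 analysis from the abstract): fit a logistic
# regression of a dichotomized radiologist call (dense vs. fatty) on volumetric
# breast density (VBD) and compute the area under the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
vbd = rng.uniform(2, 40, size=134)                 # hypothetical VBD (%) from breast CT
p_dense = 1 / (1 + np.exp(-(vbd - 15) / 4))        # hypothetical true relationship
radiologist_dense = rng.binomial(1, p_dense)       # 1 = BI-RADS C/D, 0 = BI-RADS A/B

model = LogisticRegression().fit(vbd.reshape(-1, 1), radiologist_dense)
auc = roc_auc_score(radiologist_dense, model.predict_proba(vbd.reshape(-1, 1))[:, 1])
print(f"AUC = {auc:.3f}")
```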

  16. Design standards for experimental and field studies to evaluate diagnostic accuracy of tests for infectious diseases in aquatic animals.

    PubMed

    Laurin, E; Thakur, K K; Gardner, I A; Hick, P; Moody, N J G; Crane, M S J; Ernst, I

    2018-05-01

    Design and reporting quality of diagnostic accuracy studies (DAS) are important metrics for assessing utility of tests used in animal and human health. Following standards for designing DAS will assist in appropriate test selection for specific testing purposes and minimize the risk of reporting biased sensitivity and specificity estimates. To examine the benefits of recommending standards, design information from published DAS literature was assessed for 10 finfish, seven mollusc, nine crustacean and two amphibian diseases listed in the 2017 OIE Manual of Diagnostic Tests for Aquatic Animals. Of the 56 DAS identified, 41 were based on field testing, eight on experimental challenge studies and seven on both. Also, we adapted human and terrestrial-animal standards and guidelines for DAS structure for use in aquatic animal diagnostic research. Through this process, we identified and addressed important metrics for consideration at the design phase: study purpose, targeted disease state, selection of appropriate samples and specimens, laboratory analytical methods, statistical methods and data interpretation. These recommended design standards for DAS are presented as a checklist including risk-of-failure points and actions to mitigate bias at each critical step. Adherence to standards when designing DAS will also facilitate future systematic review and meta-analyses of DAS research literature. © 2018 John Wiley & Sons Ltd.

  17. Compositionality and Statistics in Adjective Acquisition: 4-Year-Olds Interpret "Tall" and "Short" Based on the Size Distributions of Novel Noun Referents

    ERIC Educational Resources Information Center

    Barner, David; Snedeker, Jesse

    2008-01-01

    Four experiments investigated 4-year-olds' understanding of adjective-noun compositionality and their sensitivity to statistics when interpreting scalar adjectives. In Experiments 1 and 2, children selected "tall" and "short" items from 9 novel objects called "pimwits" (1-9 in. in height) or from this array plus 4 taller or shorter distractor…

  18. Admixture, Population Structure, and F-Statistics.

    PubMed

    Peter, Benjamin M

    2016-04-01

    Many questions about human genetic history can be addressed by examining the patterns of shared genetic variation between sets of populations. A useful methodological framework for this purpose is F-statistics that measure shared genetic drift between sets of two, three, and four populations and can be used to test simple and complex hypotheses about admixture between populations. This article provides context from phylogenetic and population genetic theory. I review how F-statistics can be interpreted as branch lengths or paths and derive new interpretations, using coalescent theory. I further show that the admixture tests can be interpreted as testing general properties of phylogenies, allowing extension of some of these ideas and applications to arbitrary phylogenetic trees. The new results are used to investigate the behavior of the statistics under different models of population structure and show how population substructure complicates inference. The results lead to simplified estimators in many cases, and I recommend replacing F3 with the average number of pairwise differences for estimating population divergence. Copyright © 2016 by the Genetics Society of America.
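
    As a small illustration of the estimator recommended above, the following sketch computes the average number of pairwise differences between two populations from biallelic allele frequencies. The per-locus formula p1(1-p2) + p2(1-p1) and the frequencies shown are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch (assumed allele-frequency input, hypothetical data): the average
# number of pairwise differences between two populations, computed per biallelic
# locus as p1*(1-p2) + p2*(1-p1) and averaged over loci.
import numpy as np

def avg_pairwise_diff(p1, p2):
    """Mean probability that a random allele from pop 1 differs from one from pop 2."""
    p1, p2 = np.asarray(p1), np.asarray(p2)
    return np.mean(p1 * (1 - p2) + p2 * (1 - p1))

# hypothetical derived-allele frequencies at 5 loci
pop_a = [0.10, 0.50, 0.80, 0.25, 0.60]
pop_b = [0.15, 0.40, 0.90, 0.20, 0.55]
print(f"average pairwise differences: {avg_pairwise_diff(pop_a, pop_b):.3f}")
```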

  19. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
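
    For readers unfamiliar with the a priori power analyses discussed above, the sketch below shows one way such an analysis might be run in Python with statsmodels for an independent-samples t-test, using Cohen's conventional small, medium, and large effect-size benchmarks. It is a generic illustration, not a reconstruction of any analysis in the reviewed papers.

```python
# Illustrative a priori power analysis for a two-sample t-test using statsmodels;
# effect sizes follow Cohen's small/medium/large benchmarks (0.2, 0.5, 0.8).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"{label} effect (d={d}): {n:.0f} participants per group")
```

    Reporting the assumed effect size, alpha, and target power alongside the resulting sample size is what the reviewed papers would need to make such analyses reproducible.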

  20. Diagnostic accuracy of an iPhone DICOM viewer for the interpretation of magnetic resonance imaging of the knee.

    PubMed

    De Maio, Peter; White, Lawrence M; Bleakney, Robert; Menezes, Ravi J; Theodoropoulos, John

    2014-07-01

    To evaluate the diagnostic performance of viewing magnetic resonance (MR) images on a handheld mobile device compared with a conventional radiology workstation for the diagnosis of intra-articular knee pathology. Prospective comparison study. Tertiary care center. Fifty consecutive subjects who had MR imaging of the knee followed by knee arthroscopy were prospectively evaluated. Two musculoskeletal radiologists independently reviewed each MR study using 2 different viewers: the OsiriX DICOM viewer software on an Apple iPhone 3GS device and eFilm Workstation software on a conventional picture archiving and communications system workstation. Sensitivity and specificity of the iPhone and workstation interpretations were calculated using knee arthroscopy as the reference standard. Intraobserver concordance and agreement between the iPhone and workstation interpretations were determined. There was no statistically significant difference between the 2 devices for each paired comparison of diagnostic performance. For the iPhone interpretations, sensitivity ranged from 77% (13 of 17) for the lateral meniscus to 100% (17 of 17) for the anterior cruciate ligament. Specificity ranged from 74% (14 of 19) for cartilage to 100% (50 of 50) for the posterior cruciate ligament. There was a very high level of interobserver and intraobserver agreement between devices and readers. The iPhone reads took longer than the corresponding workstation reads, with a significant mean difference between the iPhone and workstation reads of 3.98 minutes (P < 0.001). The diagnostic performance of interpreting MR images on a handheld mobile device for the assessment of intra-articular knee pathology is similar to that of a conventional radiology workstation, although it requires a longer viewing time. Timely and accurate interpretation of complex medical images using mobile device solutions could result in new workflow efficiencies and ultimately improve patient care.

  1. 75 FR 23755 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-04

    ... securities filings: Docket Numbers: ES10-35-000. Applicants: American Transmission Company LLC, ATC... Reliability Corporation for Approval of Interpretation to Reliability Standard CIP-001--Cyber Security... Corporation for Approval of Interpretation to Reliability Standard

  2. Statistical Literacy as a Function of Online versus Hybrid Course Delivery Format for an Introductory Graduate Statistics Course

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie L.; Acquaye, Hannah; Griffith, Matthew D.; Jo, Hang; Matthews, Ken; Acharya, Parul

    2017-01-01

    Statistical literacy refers to understanding fundamental statistical concepts. Assessment of statistical literacy can take the forms of tasks that require students to identify, translate, compute, read, and interpret data. In addition, statistical instruction can take many forms encompassing course delivery format such as face-to-face, hybrid,…

  3. Common pitfalls in statistical analysis: Clinical versus statistical significance

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results that are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects their impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754

  4. Deriving allowable properties of lumber : a practical guide for interpretation of ASTM standards

    Treesearch

    Alan Bendtsen; William L. Galligan

    1978-01-01

    The ASTM standards for establishing clear wood mechanical properties and for deriving structural grades and related allowable properties for visually graded lumber can be confusing and difficult for the uninitiated to interpret. This report provides a practical guide to using these standards for individuals not familiar with their application. Sample stress...

  5. Statistical Significance Testing from Three Perspectives and Interpreting Statistical Significance and Nonsignificance and the Role of Statistics in Research.

    ERIC Educational Resources Information Center

    Levin, Joel R.; And Others

    1993-01-01

    Journal editors respond to criticisms of reliance on statistical significance in research reporting. Joel R. Levin ("Journal of Educational Psychology") defends its use, whereas William D. Schafer ("Measurement and Evaluation in Counseling and Development") emphasizes the distinction between statistically significant and important. William Asher…

  6. A summative, Objective, Structured, Clinical Examination in ENT used to assess postgraduate doctors after one year of ENT training, as part of the Diploma of Otorhinolaryngology, Head and Neck Surgery.

    PubMed

    Drake-Lee, A B; Skinner, D; Hawthorne, M; Clarke, R

    2009-10-01

    'High stakes' postgraduate medical examinations should conform to current educational standards. In the UK and Ireland, national assessments in surgery are devised and managed through the examination structure of the Royal Colleges of Surgeons. Their efforts are not reported in the medical education literature. In the current paper, we aim to clarify this process. To replace the clinical section of the Diploma of Otorhinolaryngology with an Objective, Structured, Clinical Examination, and to set the level of the assessment at one year of postgraduate training in the specialty. After 'blueprinting' against the whole curriculum, an Objective, Structured, Clinical Examination comprising 25 stations was divided into six clinical stations and 19 other stations exploring written case histories, instruments, test results, written communication skills and interpretation skills. The pass mark was set using a modified borderline method and other methods, and statistical analysis of the results was performed. The results of nine examinations between May 2004 and May 2008 are presented. The pass mark varied between 68 and 82 per cent. Internal consistency was good, with a Cronbach's alpha value of 0.99 for all examinations and split-half statistics varying from 0.96 to 0.99. Different standard settings gave similar pass marks. We have developed a summative, Objective, Structured, Clinical Examination for doctors training in otorhinolaryngology, reported herein. The objectives and standards of setting a high quality assessment were met.
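
    The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from a candidates-by-stations score matrix. The sketch below uses simulated scores purely for illustration; the panel size and score scale are assumptions, not the examination data.

```python
# Minimal sketch: Cronbach's alpha for internal consistency, computed from a
# candidates-by-stations score matrix (hypothetical data, not the exam results).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = candidates, columns = stations/items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(30, 1))                               # 30 hypothetical candidates
station_scores = ability + rng.normal(scale=0.5, size=(30, 25))  # 25 stations
print(f"Cronbach's alpha = {cronbach_alpha(station_scores):.2f}")
```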

  7. Are compression garments effective for the recovery of exercise-induced muscle damage? A systematic review with meta-analysis.

    PubMed

    Marqués-Jiménez, Diego; Calleja-González, Julio; Arratibel, Iñaki; Delextrat, Anne; Terrados, Nicolás

    2016-01-01

    The aim was to identify benefits of compression garments used for recovery of exercise-induced muscle damage. Computer-based literature research was performed in September 2015 using four online databases: Medline (PubMed), Cochrane, WOS (Web Of Science) and Scopus. The analysis of risk of bias was completed in accordance with the Cochrane Collaboration Guidelines. Mean differences and 95% confidence intervals were calculated with Hedges' g for continuous outcomes. A random-effects meta-analysis model was used. Systematic differences (heterogeneity) were assessed with the I² statistic. Most results obtained had high heterogeneity, so they should be interpreted with caution. Our findings showed that creatine kinase (standard mean difference=-0.02, 9 studies) was unaffected when using compression garments for recovery purposes. In contrast, blood lactate concentration was increased (standard mean difference=0.98, 5 studies). Applying compression reduced lactate dehydrogenase (standard mean difference=-0.52, 2 studies), muscle swelling (standard mean difference=-0.73, 5 studies) and perceptual measurements (standard mean difference=-0.43, 15 studies). Analyses of power (standard mean difference=1.63, 5 studies) and strength (standard mean difference=1.18, 8 studies) indicate faster recovery of muscle function after exercise. These results suggest that the application of compression clothing may aid in the recovery of exercise-induced muscle damage, although the findings need corroboration. Copyright © 2015 Elsevier Inc. All rights reserved.
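
    As a rough illustration of the pooling behind these standardized mean differences, the sketch below applies a DerSimonian-Laird random-effects model to hypothetical per-study Hedges' g values and sampling variances and reports the pooled estimate with an I² heterogeneity statistic. The numbers are invented for illustration and are not taken from the review.

```python
# Illustrative DerSimonian-Laird random-effects pooling of standardized mean
# differences with an I² heterogeneity statistic (hypothetical per-study estimates
# and variances, not data from the review).
import numpy as np

g = np.array([-0.60, -0.45, -0.80, -0.20, -0.30])   # hypothetical Hedges' g per study
v = np.array([0.10, 0.08, 0.15, 0.12, 0.09])        # corresponding sampling variances

w_fixed = 1 / v
pooled_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)
Q = np.sum(w_fixed * (g - pooled_fixed) ** 2)        # Cochran's Q
df = len(g) - 1
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)                        # between-study variance

w_rand = 1 / (v + tau2)
pooled_rand = np.sum(w_rand * g) / np.sum(w_rand)
se = np.sqrt(1 / np.sum(w_rand))
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled SMD = {pooled_rand:.2f} "
      f"(95% CI {pooled_rand - 1.96 * se:.2f} to {pooled_rand + 1.96 * se:.2f}), "
      f"I² = {i2:.0f}%")
```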

  8. BTS statistical standards manual

    DOT National Transportation Integrated Search

    2005-10-01

    The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...

  9. The Sport Students’ Ability of Literacy and Statistical Reasoning

    NASA Astrophysics Data System (ADS)

    Hidayah, N.

    2017-03-01

    The ability of literacy and statistical reasoning is very important for sport education college students, because statistical learning materials can be drawn from many of their activities, such as sport competitions, test and measurement results, predicting achievement based on training, and finding connections among variables. This research describes sport education college students' ability of literacy and statistical reasoning related to the identification of data types, probability, table interpretation, description and explanation using bar or pie graphs, explanation of variability, and the interpretation, calculation and explanation of mean, median, and mode, assessed through an instrument. The instrument was administered to 50 college students majoring in sport; only 26% of all students scored above 30%, while the others scored below 30%. Across all subjects, 56% of students were able to identify data classifications, 49% were able to read, display and interpret tables through graphics, 27% showed ability in probability, 33% were able to describe variability, and 16.32% were able to read, calculate and describe the mean, median and mode. The results of this research show that the sport students' ability of literacy and statistical reasoning is not yet adequate, and that their statistical study has not reached conceptual comprehension, literacy training and statistical reasoning, so it is critical to increase the sport students' ability of literacy and statistical reasoning.

  10. Fully Bayesian tests of neutrality using genealogical summary statistics.

    PubMed

    Drummond, Alexei J; Suchard, Marc A

    2008-10-31

    Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously overlooked owing to the limited availability of theory and methods.

  11. Statistical results on restorative dentistry experiments: effect of the interaction between main variables

    PubMed Central

    CAVALCANTI, Andrea Nóbrega; MARCHI, Giselle Maria; AMBROSANO, Gláucia Maria Bovi

    2010-01-01

    Statistical analysis interpretation is a critical issue in scientific research. When more than one main variable is being studied in a research project, the effect of the interaction between those variables is fundamental to the discussion of experiments. However, doubts can arise when the p-value of the interaction is greater than the significance level. Objective: To determine the most adequate interpretation for factorial experiments with p-values of the interaction slightly above the significance level. Materials and methods: The p-values of the interactions found in two restorative dentistry experiments (0.053 and 0.068) were interpreted in two distinct ways: considering the interaction as not significant and as significant. Results: Different findings were observed between the two analyses, and study results became more coherent when the significant interaction was used. Conclusion: The p-value of the interaction between main variables must be analyzed with caution because it can change the outcomes of research studies. Researchers are strongly advised to interpret carefully the results of their statistical analysis in order to discuss the findings of their experiments properly. PMID:20857003
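
    The situation described above can be reproduced with a two-factor ANOVA whose interaction term is examined explicitly. The sketch below uses statsmodels on simulated data; the factor names (material, curing) and the response are hypothetical and are not the dentistry experiments.

```python
# Minimal sketch: a two-factor ANOVA whose interaction p-value drives the
# interpretation discussed above (hypothetical factors and response).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "material": np.repeat(["A", "B"], 20),
    "curing":   np.tile(np.repeat(["light", "chemical"], 10), 2),
})
df["bond_strength"] = 20 + 2 * (df["material"] == "B") + rng.normal(scale=3, size=40)

model = smf.ols("bond_strength ~ C(material) * C(curing)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # interaction row: C(material):C(curing)
```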

  12. Thermal infrared imaging of the variability of canopy-air temperature difference distribution for heavy metal stress levels discrimination in rice

    NASA Astrophysics Data System (ADS)

    Zhang, Biyao; Liu, Xiangnan; Liu, Meiling; Wang, Dongmin

    2017-04-01

    This paper addresses the assessment and interpretation of the canopy-air temperature difference (Tc-Ta) distribution as an indicator for discriminating between heavy metal stress levels. The Tc-Ta distribution is simulated by coupling the energy balance equation with a modified leaf angle distribution. Statistical indices including average value (AVG), standard deviation (SD), median, and span of Tc-Ta in the field of view of a digital thermal imager are calculated to describe the Tc-Ta distribution quantitatively and, consequently, serve as the stress indicators. In the application, two rice-growing sites under “mild” and “severe” stress levels were selected as study areas. A total of 96 thermal images obtained from field measurements across three growth stages were used in a separate application of the theoretical variation of the Tc-Ta distribution. The results demonstrated that the statistical indices calculated from both simulated and measured data exhibited an upward trend as the stress level became more severe, because heavy metal stress raises the temperature of only a portion of the leaves in the canopy. Meteorological factors could barely affect the sensitivity of the statistical indices, with the exception of wind speed. Among the statistical indices, AVG and SD were demonstrated to be the better indicators for stress level discrimination.

  13. New methods and results for quantification of lightning-aircraft electrodynamics

    NASA Technical Reports Server (NTRS)

    Pitts, Felix L.; Lee, Larry D.; Perala, Rodney A.; Rudolph, Terence H.

    1987-01-01

    The NASA F-106 collected data on the rates of change of electromagnetic parameters on the aircraft surface during over 700 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 ft (4,570 to 12,190 m). These in situ measurements provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining indirect lightning effects on aircraft. These data are used to update previous lightning criteria and standards developed over the years from ground-based measurements. The proposed standards will be the first which reflect actual aircraft responses measured at flight altitudes. Nonparametric maximum likelihood estimates of the distribution of the peak electromagnetic rates of change for consideration in the new standards are obtained based on peak recorder data for multiple-strike flights. The linear and nonlinear modeling techniques developed provide means to interpret and understand the direct-strike electromagnetic data acquired on the F-106. The reasonable results obtained with the models, compared with measured responses, provide increased confidence that the models may be credibly applied to other aircraft.

  14. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
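
    For context, the sketch below shows the standard baseline the authors compare against - cross-validated hyperparameter selection on a training set followed by evaluation on a separate held-out test set - rather than the proposed "cross-validation and cross-testing" procedure itself, which is described in the paper. The dataset and model are placeholders.

```python
# Sketch of the standard baseline described above: cross-validated hyperparameter
# selection followed by evaluation on a separate held-out test set. This is not
# the proposed "cross-validation and cross-testing" procedure.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)   # inner cross-validation
search.fit(X_train, y_train)
print("best C:", search.best_params_["C"])
print("held-out test accuracy:", search.score(X_test, y_test))
```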

  15. Weighing Evidence “Steampunk” Style via the Meta-Analyser

    PubMed Central

    Bowden, Jack; Jackson, Chris

    2016-01-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression. PMID:28003684
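
    The Egger regression mentioned above can be sketched in a few lines: each study's standardized effect (effect divided by its standard error) is regressed on its precision (one over the standard error), and an intercept far from zero suggests funnel-plot asymmetry. The study effects and standard errors below are hypothetical, and this is the conventional regression form rather than the web application's graphical interpretation.

```python
# Hedged sketch of Egger regression for small-study bias: regress each study's
# standardized effect (effect / SE) on its precision (1 / SE); an intercept far
# from zero suggests funnel-plot asymmetry. Values are hypothetical.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.35, 0.20, 0.50, 0.10, 0.65, 0.05])   # hypothetical study effects
se     = np.array([0.10, 0.15, 0.20, 0.08, 0.30, 0.12])   # their standard errors

snd = effect / se                   # standardized effects
precision = 1 / se
X = sm.add_constant(precision)
fit = sm.OLS(snd, X).fit()
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```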

  16. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.

  17. Application of a Multivariant, Caucasian-Specific, Genotyped Donor Panel for Performance Validation of MDmulticard®, ID-System®, and Scangel® RhD/ABO Serotyping

    PubMed Central

    Gassner, Christoph; Rainer, Esther; Pircher, Elfriede; Markut, Lydia; Körmöczi, Günther F.; Jungbauer, Christof; Wessin, Dietmar; Klinghofer, Roswitha; Schennach, Harald; Schwind, Peter; Schönitzer, Diether

    2009-01-01

    Background: Validations of routinely used serological typing methods require intense performance evaluations, typically including large numbers of samples, before routine application. However, such evaluations could be improved by considering information about the frequency of standard blood groups and their variants. Methods: Using RHD and ABO population genetic data, a Caucasian-specific donor panel was compiled for a performance comparison of the three RhD and ABO serological typing methods MDmulticard (Medion Diagnostics), ID-System (DiaMed) and ScanGel (Bio-Rad). The final test panel included standard and variant RHD and ABO genotypes, e.g. RhD categories, partial and weak RhDs, RhD DELs, and ABO samples, mainly to interpret weak serological reactivity for blood group A specificity. All samples were from individuals recorded in our local DNA blood group typing database. Results: For ‘standard’ blood groups, results of performance were clearly interpretable for all three serological methods compared. However, when focusing on specific variant phenotypes, pronounced differences in reaction strengths and specificities were observed between them. Conclusions: A genetically and ethnically predefined donor test panel consisting of only 93 individual samples delivered highly significant results for serological performance comparisons. Such small panels offer impressive representative power, higher than that achieved by statistical chance and large numbers alone. PMID:21113264

  18. Hold My Calls: An Activity for Introducing the Statistical Process

    ERIC Educational Resources Information Center

    Abel, Todd; Poling, Lisa

    2015-01-01

    Working with practicing teachers, this article demonstrates, through the facilitation of a statistical activity, how to introduce and investigate the unique qualities of the statistical process including: formulate a question, collect data, analyze data, and interpret data.

  19. The Statistics of wood assays for preservative retention

    Treesearch

    Patricia K. Lebow; Scott W. Conklin

    2011-01-01

    This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.

  20. Surveys Assessing Students' Attitudes toward Statistics: A Systematic Review of Validity and Reliability

    ERIC Educational Resources Information Center

    Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.

    2012-01-01

    Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…

  1. Interpretation of statistical results.

    PubMed

    García Garmendia, J L; Maroto Monserrat, F

    2018-02-21

    The appropriate interpretation of statistical results is crucial to understanding advances in medical science. Statistical tools allow us to transform the uncertainty and apparent chaos in nature into measurable parameters that are applicable to our clinical practice. Understanding the meaning and actual extent of these instruments is essential for researchers, for the funders of research, and for professionals who require continuous updating based on good evidence and support for decision making. Various aspects of study designs, results and statistical analysis are reviewed, trying to facilitate their comprehension from the basics to what is most common but not always well understood, and offering a constructive, non-exhaustive but realistic view. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  2. Sulfur in Cometary Dust

    NASA Technical Reports Server (NTRS)

    Fomenkova, M. N.

    1997-01-01

    The computer-intensive project consisted of the analysis and synthesis of existing data on composition of comet Halley dust particles. The main objective was to obtain a complete inventory of sulfur containing compounds in the comet Halley dust by building upon the existing classification of organic and inorganic compounds and applying a variety of statistical techniques for cluster and cross-correlational analyses. A student hired for this project wrote and tested the software to perform cluster analysis. The following tasks were carried out: (1) selecting the data from existing database for the proposed project; (2) finding access to a standard library of statistical routines for cluster analysis; (3) reformatting the data as necessary for input into the library routines; (4) performing cluster analysis and constructing hierarchical cluster trees using three methods to define the proximity of clusters; (5) presenting the output results in different formats to facilitate the interpretation of the obtained cluster trees; (6) selecting groups of data points common for all three trees as stable clusters. We have also considered the chemistry of sulfur in inorganic compounds.

  3. Statistical analysis of Hasegawa-Wakatani turbulence

    NASA Astrophysics Data System (ADS)

    Anderson, Johan; Hnat, Bogdan

    2017-06-01

    Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy in large scale zonal flows, and small scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example, via shearing and then viscous dissipation maintaining a sub-diffusive character of the fluxes.
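
    The Laplace-versus-Gaussian characterization of the PDFs described above can be illustrated by fitting both distributions to a heavy-tailed sample and comparing log-likelihoods. The sketch below uses synthetic data, not the Hasegawa-Wakatani simulation output.

```python
# Illustrative comparison of Gaussian and Laplace fits to a heavy-tailed sample,
# mirroring the PDF characterization described above (synthetic data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
flux = rng.laplace(loc=0.0, scale=1.0, size=5000)    # synthetic intermittent signal

mu, sigma = stats.norm.fit(flux)
loc, b = stats.laplace.fit(flux)

# Compare log-likelihoods of the two fitted distributions
ll_norm = stats.norm.logpdf(flux, mu, sigma).sum()
ll_laplace = stats.laplace.logpdf(flux, loc, b).sum()
print(f"Gaussian log-likelihood: {ll_norm:.1f}, Laplace log-likelihood: {ll_laplace:.1f}")
```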

  4. Grain size analysis and depositional environment of shallow marine to basin floor, Kelantan River Delta

    NASA Astrophysics Data System (ADS)

    Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal

    2015-09-01

    Sediment samples consisting of Quaternary bottom sediments were collected from the shallow marine area off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea. Sixty-five samples were analysed for their grain size distribution and statistical relationships. Basic statistical parameters such as mean, standard deviation, skewness and kurtosis were calculated and used to differentiate the depositional environment of the sediments and to assess the uniformity of the depositional environment, whether beach or river. The sediments of all areas varied in sorting from very well sorted to poorly sorted, from strongly negatively skewed to strongly positively skewed, and from extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments were influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, which control the sediment distribution pattern in various ways.
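
    The moment statistics named above (mean, standard deviation as a sorting measure, skewness, and kurtosis) are straightforward to compute from grain sizes expressed in phi units. The sketch below uses a small hypothetical sample, not the Kelantan data.

```python
# Minimal sketch: moment statistics of a grain-size sample in phi units - mean,
# standard deviation (sorting), skewness, and kurtosis (hypothetical values).
import numpy as np
from scipy import stats

phi = np.array([1.2, 1.5, 1.8, 2.0, 2.1, 2.3, 2.6, 3.0, 3.4, 4.1])  # hypothetical sample

print(f"mean     = {phi.mean():.2f} phi")
print(f"sorting  = {phi.std(ddof=1):.2f} phi")
print(f"skewness = {stats.skew(phi):.2f}")
print(f"kurtosis = {stats.kurtosis(phi, fisher=False):.2f}")
```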

  5. A whirling plane of satellite galaxies around Centaurus A challenges cold dark matter cosmology.

    PubMed

    Müller, Oliver; Pawlowski, Marcel S; Jerjen, Helmut; Lelli, Federico

    2018-02-02

    The Milky Way and Andromeda galaxies are each surrounded by a thin plane of satellite dwarf galaxies that may be corotating. Cosmological simulations predict that most satellite galaxy systems are close to isotropic with random motions, so those two well-studied systems are often interpreted as rare statistical outliers. We test this assumption using the kinematics of satellite galaxies around the Centaurus A galaxy. Our statistical analysis reveals evidence for corotation in a narrow plane: Of the 16 Centaurus A satellites with kinematic data, 14 follow a coherent velocity pattern aligned with the long axis of their spatial distribution. In standard cosmological simulations, <0.5% of Centaurus A-like systems show such behavior. Corotating satellite systems may be common in the universe, challenging small-scale structure formation in the prevailing cosmological paradigm. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  6. An argument for mechanism-based statistical inference in cancer

    PubMed Central

    Ochs, Michael; Price, Nathan D.; Tomasetti, Cristian; Younes, Laurent

    2015-01-01

    Cancer is perhaps the prototypical systems disease, and as such has been the focus of extensive study in quantitative systems biology. However, translating these programs into personalized clinical care remains elusive and incomplete. In this perspective, we argue that realizing this agenda—in particular, predicting disease phenotypes, progression and treatment response for individuals—requires going well beyond standard computational and bioinformatics tools and algorithms. It entails designing global mathematical models over network-scale configurations of genomic states and molecular concentrations, and learning the model parameters from limited available samples of high-dimensional and integrative omics data. As such, any plausible design should accommodate: biological mechanism, necessary for both feasible learning and interpretable decision making; stochasticity, to deal with uncertainty and observed variation at many scales; and a capacity for statistical inference at the patient level. This program, which requires a close, sustained collaboration between mathematicians and biologists, is illustrated in several contexts, including learning bio-markers, metabolism, cell signaling, network inference and tumorigenesis. PMID:25381197

  7. Interpreting international governance standards for health IT use within general medical practice.

    PubMed

    Mahncke, Rachel J; Williams, Patricia A H

    2014-01-01

    General practices in Australia recognise the importance of comprehensive protective security measures. Some elements of information security governance are incorporated into recommended standards; however, the governance component of information security is still insufficiently addressed in practice. The International Organization for Standardization (ISO) released a new global standard in May 2013 entitled ISO/IEC 27014:2013 Information technology - Security techniques - Governance of information security. This standard, applicable to organisations of all sizes, offers a framework against which to assess and implement the governance components of information security. The standard demonstrates the relationship between governance and the management of information security, provides strategic principles and processes, and forms the basis for establishing a positive information security culture. An analysis and interpretation of this standard for use in Australian general practice was performed. This work is unique, as such an interpretation for the Australian healthcare environment has not been undertaken before. It demonstrates an application of the standard at a strategic level to inform the existing development of an information security governance framework.

  8. Myths and Misconceptions in Fall Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epp, R J

    2006-02-23

    Since 1973, when OSHA CFRs 1910 and 1926 began to influence the workplace, confusion about the interpretation of the standards has been a problem, and fall protection issues are among them. This confusion is verified by the issuance of 351 (as of 11/25/05) Standard Interpretations issued by OSHA in response to formally submitted questions asking for clarification. Over the years, many workers and too many ES&H Professionals have become 'self-interpreters', reaching conclusions that do not conform to either the Standards or the published Interpretations. One conclusion that has been reached by the author is that many ES&H Professionals are either not aware of, or do not pay attention to the Standard Interpretations issued by OSHA, or the State OSHA interpretation mechanism, whoever has jurisdiction. If you fall in this category, you are doing your organization or clients a disservice and are not providing them with the best information available. Several myths and/or misconceptions have been promulgated to the point that they become accepted fact, until an incident occurs and OSHA becomes involved. For example, one very pervasive myth is that you are in compliance as long as you maintain a distance of 6 feet from the edge. No such carte blanche rule exists. In this presentation, this myth and several other common myths/misconceptions will be discussed. This presentation is focused only on Federal OSHA CFR1910 Subpart D--Walking-Working Surfaces, CFR1926 Subpart M--Fall Protection and the Fall Protection Standard Interpretation Letters. This presentation does not cover steel erection, aerial lifts and other fall protection issues. Your regulations will probably be different than those presented if you are operating under a State plan.

  9. Using a Standardized Clinical Quantitative Sensory Testing Battery to Judge the Clinical Relevance of Sensory Differences Between Adjacent Body Areas.

    PubMed

    Dimova, Violeta; Oertel, Bruno G; Lötsch, Jörn

    2017-01-01

    Skin sensitivity to sensory stimuli varies among different body areas. A standardized clinical quantitative sensory testing (QST) battery, established for the diagnosis of neuropathic pain, was used to assess whether the magnitude of differences between test sites reaches clinical significance. Ten different sensory QST measures derived from thermal and mechanical stimuli were obtained from 21 healthy volunteers (10 men) and used to create somatosensory profiles bilaterally from the dorsum of the hands (the standard area for the assessment of normative values for the upper extremities, as proposed by the German Research Network on Neuropathic Pain) and bilaterally at the volar forearms as a neighboring nonstandard area. The parameters obtained were statistically compared between test sites. Three of the 10 QST parameters differed significantly with respect to the "body area," that is, warmth detection, thermal sensory limen, and mechanical pain thresholds. After z-transformation and interpretation according to the QST battery's standard instructions, 22 abnormal values were obtained at the hand. Applying the same procedure to parameters assessed at the nonstandard site (forearm), that is, z-transforming them to the reference values for the hand, 24 measurement values emerged as abnormal, which was not significantly different compared with the hand (P=0.4185). Sensory differences between neighboring body areas are statistically significant, reproducing prior knowledge. This has to be considered in scientific assessments where a small variation of the tested body areas may not be an option. However, the magnitude of these differences was below the difference in sensory parameters that is judged as abnormal, indicating a robustness of the QST instrument against protocol deviations with respect to the test area when using the method of comparison with a 95% confidence interval of a reference dataset.
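
    The z-transformation step described above amounts to scaling each measurement by a reference mean and standard deviation and flagging values outside the 95% reference interval. The sketch below illustrates this with invented numbers; the reference values are assumptions, not the study's normative data.

```python
# Minimal sketch: z-transforming test-site measurements against reference-site
# means and standard deviations and flagging |z| > 1.96 (outside the 95%
# reference interval) as abnormal. All numbers are hypothetical.
import numpy as np

ref_mean, ref_sd = 1.8, 0.6                           # hypothetical reference values
measurements = np.array([1.5, 2.2, 3.3, 0.4, 1.9])    # hypothetical forearm measurements

z = (measurements - ref_mean) / ref_sd
abnormal = np.abs(z) > 1.96
print("z-scores:", np.round(z, 2))
print("abnormal:", abnormal)
```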

  10. Disaster response team FAST skills training with a portable ultrasound simulator compared to traditional training: pilot study.

    PubMed

    Paddock, Michael T; Bailitz, John; Horowitz, Russ; Khishfe, Basem; Cosby, Karen; Sergel, Michelle J

    2015-03-01

    Pre-hospital focused assessment with sonography in trauma (FAST) has been effectively used to improve patient care in multiple mass casualty events throughout the world. Although requisite FAST knowledge may now be learned remotely by disaster response team members, traditional live instructor and model hands-on FAST skills training remains logistically challenging. The objective of this pilot study was to compare the effectiveness of a novel portable ultrasound (US) simulator with traditional FAST skills training for a deployed mixed provider disaster response team. We randomized participants into one of three training groups stratified by provider role: Group A. Traditional Skills Training, Group B. US Simulator Skills Training, and Group C. Traditional Skills Training Plus US Simulator Skills Training. After skills training, we measured participants' FAST image acquisition and interpretation skills using a standardized direct observation tool (SDOT) with healthy models and review of FAST patient images. Pre- and post-course US and FAST knowledge were also assessed using a previously validated multiple-choice evaluation. We used the ANOVA procedure to determine the statistical significance of differences between the means of each group's skills scores. Paired sample t-tests were used to determine the statistical significance of pre- and post-course mean knowledge scores within groups. We enrolled 36 participants, 12 randomized to each training group. Randomization resulted in similar distribution of participants between training groups with respect to provider role, age, sex, and prior US training. For the FAST SDOT image acquisition and interpretation mean skills scores, there was no statistically significant difference between training groups. For US and FAST mean knowledge scores, there was a statistically significant improvement between pre- and post-course scores within each group, but again there was not a statistically significant difference between training groups. This pilot study of a deployed mixed-provider disaster response team suggests that a novel portable US simulator may provide equivalent skills training in comparison to traditional live instructor and model training. Further studies with a larger sample size and other measures of short- and long-term clinical performance are warranted.

  11. Impact of Integrated Science and English Language Arts Literacy Supplemental Instructional Intervention on Science Academic Achievement of Elementary Students

    NASA Astrophysics Data System (ADS)

    Marks, Jamar Terry

    The purpose of this quasi-experimental, nonequivalent pretest-posttest control group design study was to determine if any differences existed in upper elementary school students' science academic achievement when instructed using an 8-week integrated science and English language arts literacy supplemental instructional intervention in conjunction with traditional science classroom instruction as compared to when instructed using solely traditional science classroom instruction. The targeted sample population consisted of fourth-grade students enrolled in a public elementary school located in the southeastern region of the United States. The convenience sample size consisted of 115 fourth-grade students enrolled in science classes. The pretest and posttest academic achievement data collected consisted of the science segment from the Spring 2015, and Spring 2016 state standardized assessments. Pretest and posttest academic achievement data were analyzed using an ANCOVA statistical procedure to test for differences, and the researcher reported the results of the statistical analysis. The results of the study show no significant difference in science academic achievement between treatment and control groups. An interpretation of the results and recommendations for future research were provided by the researcher upon completion of the statistical analysis.

  12. Rainfall Results of the Florida Area Cumulus Experiment, 1970-76.

    NASA Astrophysics Data System (ADS)

    Woodley, William L.; Jordan, Jill; Barnston, Anthony; Simpson, Joanne; Biondini, Ron; Flueck, John

    1982-02-01

    The Florida Area Cumulus Experiment of 1970-76 (FACE-1) is a single-area, randomized, exploratory experiment to determine whether seeding cumuli for dynamic effects (dynamic seeding) can be used to augment convective rainfall over a substantial target area (1.3 × 104 km2) in south Florida. Rainfall is estimated using S-band radar observations after adjustment by raingages. The two primary response variables are rain volumes in the total target (TT) and in the floating target (FT), the most intensely treated portion of the target. The experimental unit is the day and the main observational period is the 6 h after initiation of treatment (silver iodide flares on seed days and either no flares or placebos on control days). Analyses without predictors suggest apparent increases in both the location (means and medians) and the dispersion (standard deviation and interquartile range) characteristics of rainfall due to seeding in the FT and TT variables with substantial statistical support for the FT results and lesser statistical support for the TT results. Analyses of covariance using meteorologically meaningful predictor variables suggest a somewhat larger effect of seeding with stronger statistical support. These results are interpreted in terms of the FACE conceptual model.

  13. Statistics corner: A guide to appropriate use of correlation coefficient in medical research.

    PubMed

    Mukaka, M M

    2012-09-01

    Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to appropriate use of correlation in medical research and to highlight some misuse. Examples of the applications of the correlation coefficient have been provided using data from statistical simulations as well as real data. A rule of thumb for interpreting the size of a correlation coefficient has also been provided.
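
    As a brief illustration of the guidance above, the sketch below computes Pearson and Spearman correlation coefficients with p-values on simulated data and attaches interpretation labels from one common rule of thumb; the cut-offs are illustrative and not necessarily the article's exact table.

```python
# Illustrative computation of Pearson and Spearman correlation coefficients with
# p-values; the interpretation labels follow one common rule of thumb, not
# necessarily the article's exact cut-offs. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(scale=0.8, size=100)

r, p = stats.pearsonr(x, y)
rho, p_s = stats.spearmanr(x, y)

def label(coef):
    coef = abs(coef)
    if coef < 0.3:  return "negligible"
    if coef < 0.5:  return "low"
    if coef < 0.7:  return "moderate"
    if coef < 0.9:  return "high"
    return "very high"

print(f"Pearson r = {r:.2f} ({label(r)}), p = {p:.3g}")
print(f"Spearman rho = {rho:.2f} ({label(rho)}), p = {p_s:.3g}")
```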

  14. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  15. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model...to begin planning experiments for statistical inference applications. APPROACH In the ocean acoustics community over the past two decades...solutions for waveguide parameters. With the introduction of statistical inference to the field of ocean acoustics came the desire to interpret marginal

  16. On Ruch's Principle of Decreasing Mixing Distance in classical statistical physics

    NASA Astrophysics Data System (ADS)

    Busch, Paul; Quadt, Ralf

    1990-10-01

    Ruch's Principle of Decreasing Mixing Distance is reviewed as a statistical physical principle and its basic support and geometric interpretation, the Ruch-Schranner-Seligman theorem, is generalized to be applicable to a large representative class of classical statistical systems.

  17. Setting Performance Standards for Technical and Nontechnical Competence in General Surgery.

    PubMed

    Szasz, Peter; Bonrath, Esther M; Louridas, Marisa; Fecso, Andras B; Howe, Brett; Fehr, Adam; Ott, Michael; Mack, Lloyd A; Harris, Kenneth A; Grantcharov, Teodor P

    2017-07-01

    The objectives of this study were to (1) create a technical and nontechnical performance standard for the laparoscopic cholecystectomy, (2) assess the classification accuracy and (3) credibility of these standards, (4) determine trainees' ability to meet both standards concurrently, and (5) delineate factors that predict standard acquisition. Scores on performance assessments are difficult to interpret in the absence of established standards. Trained raters observed General Surgery residents performing laparoscopic cholecystectomies using the Objective Structured Assessment of Technical Skill (OSATS) and the Objective Structured Assessment of Non-Technical Skills (OSANTS) instruments, while also providing a global competent/noncompetent decision for each performance. The global decision was used to divide the trainees into 2 contrasting groups, and the OSATS or OSANTS scores were graphed per group to determine the performance standard. Parametric statistics were used to determine classification accuracy and concurrent standard acquisition, and receiver operating characteristic (ROC) curves were used to delineate predictive factors. Thirty-six trainees were observed 101 times. The technical standard was an OSATS of 21.04/35.00 and the nontechnical standard an OSANTS of 22.49/35.00. Applying these standards, competent/noncompetent trainees could be discriminated in 94% of technical and 95% of nontechnical performances (P < 0.001). A 21% discordance between technically and nontechnically competent trainees was identified (P < 0.001). ROC analysis demonstrated that case experience and trainee level were both able to predict achieving the standards with an area under the curve (AUC) between 0.83 and 0.96 (P < 0.001). The present study presents defensible standards for technical and nontechnical performance. Such standards are imperative to implementing summative assessments into surgical training.
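
    The contrasting-groups logic described above can be sketched as follows: given scores for performances rated globally competent versus noncompetent, choose the cut score that best separates the two distributions. The scores, candidate grid, and accuracy criterion below are illustrative assumptions, not the study's data or exact procedure.

      # Contrasting-groups sketch: pick the cut score that best separates
      # performances rated globally competent vs. noncompetent.
      # Scores are invented illustration data, not the study's data.
      import numpy as np

      competent    = np.array([24, 26, 22, 28, 25, 30, 27, 23])
      noncompetent = np.array([15, 18, 20, 17, 19, 16, 21, 14])

      candidates = np.arange(10, 36, 0.5)

      def accuracy(cut):
          correct = (competent >= cut).sum() + (noncompetent < cut).sum()
          return correct / (len(competent) + len(noncompetent))

      best_cut = max(candidates, key=accuracy)
      print(best_cut, accuracy(best_cut))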

  18. The use of lower resolution viewing devices for mammographic interpretation: implications for education and training.

    PubMed

    Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G

    2015-10-01

    To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance in all viewing devices. • Smart phones are not suitable for displaying mammograms.

  19. Does periodic lung screening of films meets standards?

    PubMed

    Binay, Songul; Arbak, Peri; Safak, Alp Alper; Balbay, Ege Gulec; Bilgin, Cahit; Karatas, Naciye

    2016-01-01

    To determine whether workers' periodic chest x-ray screening is performed in accordance with quality standards, a responsibility that falls to physicians; to evaluate differences in interpretation among physicians at different levels of training; and to underline the importance of standardizing interpretation. Chest radiographs previously taken of 400 workers employed in a factory producing glass run channels were evaluated against technical and quality standards by three observers (a pulmonologist, a radiologist, and a pulmonologist assistant). There was perfect concordance between the radiologist and the pulmonologist for underpenetrated films, whereas there was perfect concordance between the pulmonologist and the pulmonologist assistant for overpenetrated films. The pulmonologist interpreted the film dose as regular more often (52%) than the other observers (radiologist, 44.3%; pulmonologist assistant, 30.4%). The pulmonologist interpreted films as taken in the inspiratory phase less frequently (81.7%) than the other observers (radiologist, 92.1%; pulmonologist assistant, 92.6%). The pulmonologist assessed patient positioning as symmetrical at a higher rate (53.5%) than the other observers (radiologist, 44.6%; pulmonologist assistant, 41.8%). The pulmonologist assistant reported parenchymal findings most frequently (15.3%; radiologist, 2.2%; pulmonologist, 12.9%). It is necessary to reorganize the technical standards and exposure procedures to improve the quality of chest radiographs. Reappraisal of all interpreters and continuous training of technicians are required.

  20. HiCRep: assessing the reproducibility of Hi-C data using a stratum-adjusted correlation coefficient

    PubMed Central

    Yang, Tao; Zhang, Feipeng; Yardımcı, Galip Gürkan; Song, Fan; Hardison, Ross C.; Noble, William Stafford; Yue, Feng; Li, Qunhua

    2017-01-01

    Hi-C is a powerful technology for studying genome-wide chromatin interactions. However, current methods for assessing Hi-C data reproducibility can produce misleading results because they ignore spatial features in Hi-C data, such as domain structure and distance dependence. We present HiCRep, a framework for assessing the reproducibility of Hi-C data that systematically accounts for these features. In particular, we introduce a novel similarity measure, the stratum adjusted correlation coefficient (SCC), for quantifying the similarity between Hi-C interaction matrices. Not only does it provide a statistically sound and reliable evaluation of reproducibility, SCC can also be used to quantify differences between Hi-C contact matrices and to determine the optimal sequencing depth for a desired resolution. The measure consistently shows higher accuracy than existing approaches in distinguishing subtle differences in reproducibility and depicting interrelationships of cell lineages. The proposed measure is straightforward to interpret and easy to compute, making it well-suited for providing standardized, interpretable, automatable, and scalable quality control. The freely available R package HiCRep implements our approach. PMID:28855260
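
    A simplified sketch of the stratum-adjustment idea: correlate the two contact matrices within each genomic-distance stratum (each off-diagonal), then combine the per-stratum correlations with weights. The published SCC applies smoothing and variance-stabilizing weights, so the size-weighted average below is only an approximation of the method, and the matrices are simulated.

      # Simplified stratum-adjusted similarity between two Hi-C matrices:
      # correlate contacts within each distance stratum, then combine the
      # per-stratum correlations with weights proportional to stratum size.
      # (HiCRep's SCC uses variance-stabilizing weights; this is a simplification.)
      import numpy as np

      def stratum_adjusted_corr(A, B, max_dist):
          weights, corrs = [], []
          for d in range(max_dist + 1):
              a = np.diagonal(A, offset=d).astype(float)
              b = np.diagonal(B, offset=d).astype(float)
              if a.std() == 0 or b.std() == 0:
                  continue
              corrs.append(np.corrcoef(a, b)[0, 1])
              weights.append(len(a))
          w = np.array(weights, dtype=float)
          return float(np.sum(w * np.array(corrs)) / w.sum())

      rng = np.random.default_rng(0)
      A = rng.poisson(5, size=(50, 50)); A = A + A.T        # toy contact matrix
      B = A + rng.poisson(1, size=(50, 50))                 # noisy "replicate"
      print(stratum_adjusted_corr(A, B, max_dist=10))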

  1. Comprehensive characterisation of groundwater quality in and around a landfill area for agricultural suitability

    NASA Astrophysics Data System (ADS)

    Hariharan, V.; Chilambarasan, L.; Nandhakumar, G.; Porchelvan, P.

    2017-11-01

    Groundwater contamination has become so alarming that existing valuable freshwater resources are at stake. Landfilling of solid refuse without pre-emptive measures leads, over the years, to severe depletion of groundwater quality in its vicinity. The Kodungaiyur landfill in Perambur taluk, located in the northernmost region of the Chennai metropolitan area, is such a poorly managed landfill. This article presents a detailed study of the physicochemical and bacteriological parameters of the subsurface water currently available in and around the landfill area. Although the faecal coliform test made it evident that the water is not potable, the chief objective was to investigate the suitability of the groundwater for irrigation. Representative groundwater samples were collected from inside the landfill site and from residential areas located within 2 km of the site, and analysed using standard methods. The test results were interpreted using exhaustive statistical approaches. These interpretations show that, of the nine sampled locations, seven yielded groundwater of a quality fit for irrigation.

  2. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  3. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  4. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  5. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  6. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  7. Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization

    NASA Astrophysics Data System (ADS)

    Eroglu, Sertac

    2014-10-01

    The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, is termed the statistical mechanical Menzerath-Altmann model. The derived model allows the model parameters to be interpreted in terms of physical concepts. We also propose that many organizations exhibiting Menzerath-Altmann behavior, whether linguistic or not, can be methodically examined with the transformed distribution model through a properly defined structure-dependent parameter and the energy-associated states.
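
    The classical Menzerath-Altmann relation is often written as y(x) = a·x^b·e^(−cx), where x is the size of a construct and y the mean size of its constituents. A minimal fitting sketch under that assumption, with invented data, is shown below; it does not reproduce the transformed model derived in the paper.

      # Fitting the classical Menzerath-Altmann form y = a * x**b * exp(-c*x).
      # The data points are invented for illustration.
      import numpy as np
      from scipy.optimize import curve_fit

      def ma_law(x, a, b, c):
          return a * x**b * np.exp(-c * x)

      x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)       # e.g. words per clause
      y = np.array([3.2, 2.6, 2.3, 2.1, 2.0, 1.9, 1.85, 1.8])   # e.g. mean syllables per word

      params, _ = curve_fit(ma_law, x, y, p0=(3.0, -0.1, 0.01))
      print(dict(zip("abc", params)))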

  8. Evaluation of Rigid-Body Motion Compensation in Cardiac Perfusion SPECT Employing Polar-Map Quantification

    PubMed Central

    Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.

    2016-01-01

    We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. Motion correction reduced the impact of motion such that with it 32.8 % of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170

  9. Decision trees in epidemiological research.

    PubMed

    Venkatasubramaniam, Ashwini; Wolfson, Julian; Mitchell, Nathan; Barnes, Timothy; JaKa, Meghan; French, Simone

    2017-01-01

    In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods. We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups who share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression tree (CART) technique and the newer Conditional Inference tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel way to visualize the subgroups defined by decision trees. Our novel graphical visualization provides a more scientifically meaningful characterization of the subgroups identified by decision trees. Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation.
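
    A minimal illustration of subgroup identification with a regression tree. The study itself used R implementations of CART and CTree; the sklearn sketch below, with simulated data loosely echoing a portion-size setting, only demonstrates the general idea of partitioning on covariates to find outcome-homogeneous subgroups.

      # Minimal regression-tree sketch for identifying outcome-homogeneous
      # subgroups from covariates. Data are simulated for illustration.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor, export_text

      rng = np.random.default_rng(1)
      n = 500
      age = rng.uniform(18, 70, n)
      snacks = rng.integers(0, 6, n)
      # True subgroup effect: younger, frequent snackers have higher intake.
      intake = 1800 + 150 * (age < 35) * (snacks >= 3) + rng.normal(0, 50, n)

      X = np.column_stack([age, snacks])
      tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, intake)
      print(export_text(tree, feature_names=["age", "snacks_per_day"]))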

  10. Calibrated Peer Review for Interpreting Linear Regression Parameters: Results from a Graduate Course

    ERIC Educational Resources Information Center

    Enders, Felicity B.; Jenkins, Sarah; Hoverman, Verna

    2010-01-01

    Biostatistics is traditionally a difficult subject for students to learn. While the mathematical aspects are challenging, it can also be demanding for students to learn the exact language to use to correctly interpret statistical results. In particular, correctly interpreting the parameters from linear regression is both a vital tool and a…
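
    A small sketch of the interpretation task the course targets: fitting a simple linear regression and stating the slope as the expected change in the outcome per one-unit change in the predictor. The variables and numbers are invented for illustration.

      # Interpreting linear regression parameters: the slope is the expected
      # change in the outcome per one-unit change in the predictor; the
      # intercept is the expected outcome when the predictor is zero.
      # Data are invented for illustration.
      import numpy as np
      import statsmodels.api as sm

      hours = np.array([2, 4, 5, 7, 8, 10, 12], dtype=float)       # study hours
      score = np.array([55, 62, 66, 71, 75, 82, 88], dtype=float)  # exam score

      X = sm.add_constant(hours)
      fit = sm.OLS(score, X).fit()
      intercept, slope = fit.params
      ci = fit.conf_int()
      print(f"Each additional hour is associated with {slope:.1f} more points "
            f"(95% CI {ci[1][0]:.1f} to {ci[1][1]:.1f}).")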

  11. 48 CFR 9904.406-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.406-61 Interpretation. (a) Questions have arisen as to... categories of costs that have been included in the past and may be considered in the future as restructuring... restructuring costs shall not exceed five years. The straight-line method of amortization should normally be...

  12. Confidence of compliance: a Bayesian approach for percentile standards.

    PubMed

    McBride, G B; Ellis, J C

    2001-04-01

    Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
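
    The Beta-binomial machinery described above is straightforward to sketch. Assuming a 95th-percentile standard (allowed exceedance proportion 0.05), x observed exceedances in n samples, and the Jeffreys prior Beta(0.5, 0.5), the posterior is Beta(0.5 + x, 0.5 + n − x) and the confidence of compliance is the posterior probability that the true exceedance proportion is at or below 0.05. The sample sizes below are illustrative only.

      # "Confidence of compliance" sketch: Beta prior + binomial likelihood.
      # A 95th-percentile standard allows a true exceedance proportion p <= 0.05;
      # given x exceedances in n samples, report Pr(p <= 0.05 | data).
      from scipy.stats import beta

      def confidence_of_compliance(x, n, allowed=0.05, a_prior=0.5, b_prior=0.5):
          # Jeffreys prior Beta(0.5, 0.5) by default; the posterior is conjugate.
          posterior = beta(a_prior + x, b_prior + n - x)
          return posterior.cdf(allowed)

      for x in range(0, 4):
          print(x, round(confidence_of_compliance(x, n=20), 3))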

  13. Protocol for monitoring metals in Ozark National Scenic Riverways, Missouri: Version 1.0

    USGS Publications Warehouse

    Schmitt, Christopher J.; Brumbaugh, William G.; Besser, John M.; Hinck, Jo Ellen; Bowles, David E.; Morrison, Lloyd W.; Williams, Michael H.

    2008-01-01

    The National Park Service is developing a monitoring plan for the Ozark National Scenic Riverways in southeastern Missouri. Because of concerns about the release of lead, zinc, and other metals from lead-zinc mining to streams, the monitoring plan will include mining-related metals. After considering a variety of alternatives, the plan will consist of measuring the concentrations of cadmium, cobalt, lead, nickel, and zinc in composite samples of crayfish (Orconectes luteus or alternate species) and Asian clam (Corbicula fluminea) collected periodically from selected sites. This document, which comprises a protocol narrative and supporting standard operating procedures, describes the methods to be employed prior to, during, and after collection of the organisms, along with procedures for their chemical analysis and quality assurance; statistical analysis, interpretation, and reporting of the data; and for modifying the protocol narrative and supporting standard operating procedures. A list of supplies and equipment, data forms, and sample labels are also included. An example based on data from a pilot study is presented.

  14. Search for a Dark Photon in e⁺e⁻ Collisions at BaBar

    DOE PAGES

    Lees, J. P.; Poireau, V.; Tisserand, V.; ...

    2014-11-10

    Dark sectors charged under a new Abelian interaction have recently received much attention in the context of dark matter models. These models introduce a light new mediator, the so-called dark photon (A'), connecting the dark sector to the standard model. We present a search for a dark photon in the reaction e⁺e⁻ → γA', A' → e⁺e⁻, μ⁺μ⁻ using 514 fb⁻¹ of data collected with the BABAR detector. We observe no statistically significant deviations from the standard model predictions, and we set 90% confidence level upper limits on the mixing strength between the photon and dark photon at the level of 10⁻⁴-10⁻³ for dark photon masses in the range 0.02-10.2 GeV. We further constrain the range of the parameter space favored by interpretations of the discrepancy between the calculated and measured anomalous magnetic moment of the muon.

  15. Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential

    NASA Technical Reports Server (NTRS)

    Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real-time. The data acquisition and control system were programmed to automatically stop when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. Utilizing this new method in leak rate quantification, projects will reduce cost and schedule, improve test results, and ease interpretation between data sets.
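
    A sketch of the mass-point idea: convert pressure and temperature readings to gas mass via the ideal-gas law and take the fitted slope of mass versus time as the leak rate. The volume, gas properties, and readings below are assumptions for illustration, not the test article's values.

      # Mass-point sketch: convert pressure/temperature samples to gas mass with
      # the ideal-gas law, then estimate leak rate as the slope of mass vs. time.
      # Geometry, gas constant choice, and readings are illustrative only.
      import numpy as np

      R_SPECIFIC_AIR = 287.05   # J/(kg*K), assumed working gas is air
      VOLUME = 0.012            # m^3, assumed upstream volume

      t = np.array([0, 60, 120, 180, 240], dtype=float)               # s
      p = np.array([201325, 201120, 200918, 200711, 200502], float)   # Pa
      T = np.array([293.2, 293.1, 293.2, 293.3, 293.2])               # K

      mass = p * VOLUME / (R_SPECIFIC_AIR * T)      # kg of gas in the volume
      slope, intercept = np.polyfit(t, mass, 1)     # kg/s
      print(f"leak rate ≈ {-slope:.3e} kg/s")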

  16. Paleomagnetism.org - An online multi-platform and open source environment for paleomagnetic analysis.

    NASA Astrophysics Data System (ADS)

    Koymans, Mathijs; Langereis, Cor; Pastor-Galán, Daniel; van Hinsbergen, Douwe

    2017-04-01

    This contribution gives an overview of Paleomagnetism.org (Koymans et al., 2016), an online environment for paleomagnetic analysis. The application is developed in JavaScript and is fully open-sourced. It presents an interactive website in which paleomagnetic data can be interpreted, evaluated, visualized, and shared with others. The application has been available since late 2015 and has since evolved with the addition of a magnetostratigraphic tool, additional input formats, and features that emphasize the link between geomagnetism and tectonics. In the interpretation portal, principal component analysis (Kirschvink et al., 1981) can be applied to visualized demagnetization data (Zijderveld, 1967). Interpreted directions and great circles are combined using the iterative procedure described by McFadden and McElhinny (1988). The resulting directions can be further used in the statistics portal or exported as raw tabulated data and high-quality figures. The available tools in the statistics portal cover standard Fisher statistics for directional data and virtual geomagnetic poles (Fisher, 1953; Butler, 1992; Deenen et al., 2011). Other tools include the eigenvector approach foldtest (Tauxe and Watson, 1994), a bootstrapped reversal test (Tauxe et al., 2009), and the classical reversal test (McFadden and McElhinny, 1990). An implementation exists for the detection and correction of inclination shallowing in sediments (Tauxe and Kent, 2004; Tauxe et al., 2008), and a module to visualize apparent polar wander paths (Torsvik et al., 2012; Kent and Irving, 2010; Besse and Courtillot, 2002) for large continent-bearing plates. A miscellaneous portal offers a set of tools that include a bootstrapped oroclinal test (Pastor-Galán et al., 2016) for assessing possible linear relationships between strike and declination. Another available tool performs a net tectonic rotation analysis (after Morris et al., 1999) that restores a dyke to its paleo-vertical and can be used in determining paleo-spreading directions fundamental to plate reconstructions. Paleomagnetism.org provides an integrated approach for researchers to export and share paleomagnetic data through a common interface. The portals create a custom exportable file that can be distributed and included in public databases. With a publication, this file can be appended and would contain all paleomagnetic data discussed in the publication. The appended file can then be imported to the application by other researchers for reviewing. The accessibility and simplicity with which paleomagnetic data can be interpreted, analyzed, visualized, and shared should make Paleomagnetism.org of interest to the paleomagnetic and tectonic communities.
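
    As an example of the Fisher statistics that such a statistics portal implements, the sketch below computes a Fisher (1953) mean direction, precision parameter k, and alpha95 from a small set of invented declination/inclination pairs; it is independent of the Paleomagnetism.org code base.

      # Fisher (1953) statistics for paleomagnetic directions: mean direction,
      # precision parameter k, and the 95% confidence cone alpha95.
      # Declination/inclination values are invented for illustration.
      import numpy as np

      dec = np.radians([12.0, 15.5, 9.8, 14.2, 11.1, 13.7])
      inc = np.radians([45.0, 47.2, 43.8, 46.1, 44.5, 48.0])

      # Unit vectors (x north, y east, z down).
      x = np.cos(inc) * np.cos(dec)
      y = np.cos(inc) * np.sin(dec)
      z = np.sin(inc)

      N = len(dec)
      R = np.sqrt(x.sum()**2 + y.sum()**2 + z.sum()**2)   # resultant length
      k = (N - 1) / (N - R)                               # precision parameter
      a95 = np.degrees(np.arccos(1 - (N - R) / R * (20**(1 / (N - 1)) - 1)))

      mean_dec = np.degrees(np.arctan2(y.sum(), x.sum()))
      mean_inc = np.degrees(np.arcsin(z.sum() / R))
      print(f"D={mean_dec:.1f}, I={mean_inc:.1f}, k={k:.1f}, a95={a95:.1f}")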

  17. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
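
    One simple way to give importance scores a statistical interpretation, in the spirit of the procedures evaluated, is a permutation null: recompute the importances with the outcome labels shuffled and compare. The sketch below uses simulated data and a random forest; it illustrates the idea and is not the authors' exact procedures.

      # Permutation-null sketch for making importance scores statistically
      # interpretable: compare each feature's observed importance to its
      # distribution under random permutations of the outcome labels.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 20))
      y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

      def importances(X, y):
          model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
          return model.feature_importances_

      obs = importances(X, y)
      B = 50
      null = np.array([importances(X, rng.permutation(y)) for _ in range(B)])
      p_values = ((null >= obs).sum(axis=0) + 1) / (B + 1)
      print(p_values[:5])  # informative features should get small p-values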

  18. Computational intelligence approaches for pattern discovery in biological systems.

    PubMed

    Fogel, Gary B

    2008-07-01

    Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.

  19. The FORTRAN static source code analyzer program (SAP) user's guide, revision 1

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Eslinger, S.

    1982-01-01

    The FORTRAN Static Source Code Analyzer Program (SAP) User's Guide (Revision 1) is presented. SAP is a software tool designed to assist Software Engineering Laboratory (SEL) personnel in conducting studies of FORTRAN programs. SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. This document is a revision of the previous SAP user's guide, Computer Sciences Corporation document CSC/TM-78/6045. SAP Revision 1 is the result of program modifications to provide several new reports, additional complexity analysis, and recognition of all statements described in the FORTRAN 77 standard. This document provides instructions for operating SAP and contains information useful in interpreting SAP output.

  20. Analysis and interpretation of cost data in randomised controlled trials: review of published studies

    PubMed Central

    Barber, Julie A; Thompson, Simon G

    1998-01-01

    Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. Key messages: • Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. • A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. • Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. • In at least two thirds of the papers, the main conclusions regarding costs were not justified. • The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving. PMID:9794854

  1. On Some Assumptions of the Null Hypothesis Statistical Testing

    ERIC Educational Resources Information Center

    Patriota, Alexandre Galvão

    2017-01-01

    Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…

  2. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; averaging them therefore makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates, or equivalently the t-statistics on unstandardized estimates, also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
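
    The partial-standard-deviation standardization mentioned above can be sketched under the Bring (1994) definition, s*_j = s_j·sqrt(1/VIF_j)·sqrt((n−1)/(n−p)), with the standardized estimate taken as the coefficient times s*_j. The simulated data and this particular formula choice are assumptions for illustration, not the paper's code.

      # Standardizing regression estimates by partial standard deviations
      # (following Bring 1994): s*_j = s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)).
      # Data are simulated; this sketches the idea only.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.outliers_influence import variance_inflation_factor

      rng = np.random.default_rng(2)
      n, p = 200, 3
      X = rng.normal(size=(n, p))
      X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]          # induce multicollinearity
      y = 1.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)

      Xc = sm.add_constant(X)
      fit = sm.OLS(y, Xc).fit()

      for j in range(p):
          vif = variance_inflation_factor(Xc, j + 1)
          partial_sd = X[:, j].std(ddof=1) * np.sqrt(1 / vif) * np.sqrt((n - 1) / (n - p))
          print(j, fit.params[j + 1] * partial_sd)      # scale-commensurate estimate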

  3. Standardized quality-assessment system to evaluate pressure ulcer care in the nursing home.

    PubMed

    Bates-Jensen, Barbara M; Cadogan, Mary; Jorge, Jennifer; Schnelle, John F

    2003-09-01

    To demonstrate reliability and feasibility of a standardized protocol to assess and score quality indicators relevant to pressure ulcer (PU) care processes in nursing homes (NHs). Descriptive. Eight NHs. One hundred ninety-one NH residents for whom the PU Resident Assessment Protocol of the Minimum Data Set was initiated. Nine quality indicators (two related to screening and prevention of PU, two focused on assessment, and five addressing management) were scored using medical record data, direct human observation, and wireless thigh monitor observation data. Feasibility and reliability of medical record, observation, and thigh monitor protocols were determined. The percentage of participants who passed each of the indicators, indicating care consistent with practice guidelines, ranged from 0% to 98% across all indicators. In general, participants in NHs passed fewer indicators and had more problems with medical record accuracy before a PU was detected (screening/prevention indicators) than they did once an ulcer was documented (assessment and management indicators). Reliability of the medical record protocol showed kappa statistics ranging from 0.689 to 1.00 and percentage agreement from 80% to 100%. Direct observation protocols yielded kappa statistics of 0.979 and 0.928. Thigh monitor protocols showed kappa statistics ranging from 0.609 to 0.842. Training was variable, with the observation protocol requiring 1 to 2 hours, medical records requiring joint review of 20 charts with average time to complete the review of 20 minutes, and the thigh monitor data requiring 1 week for training in data preparation and interpretation. The standardized quality assessment system generated scores for nine PU quality indicators with good reliability and provided explicit scoring rules that permit reproducible conclusions about PU care. The focus of the indicators on care processes that are under the control of NH staff made the protocol useful for external survey and internal quality improvement purposes, and the thigh monitor observational technology provided a method for monitoring repositioning care processes that were otherwise difficult to monitor and manage.

  4. Criteria to Evaluate Interpretive Guides for Criterion-Referenced Tests

    ERIC Educational Resources Information Center

    Trapp, William J.

    2007-01-01

    This project provides a list of criteria for which the contents of interpretive guides written for customized, criterion-referenced tests can be evaluated. The criteria are based on the "Standards for Educational and Psychological Testing" (1999) and examine the content breadth of interpretive guides. Interpretive guides written for…

  5. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
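
    A minimal Python counterpart to the SPSS and R examples the tutorial describes: chance-corrected agreement between two raters via Cohen's kappa. The ratings below are invented.

      # Cohen's kappa for two raters' categorical codes (chance-corrected
      # agreement), using ratings invented for illustration.
      from sklearn.metrics import cohen_kappa_score

      rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
      rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

      print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")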

  6. 'PROBABILISTIC Knowledge' as 'OBJECTIVE Knowledge' in Quantum Mechanics: Potential Immanent Powers Instead of Actual Properties

    NASA Astrophysics Data System (ADS)

    Ronde, Christian De

    In classical physics, probabilistic or statistical knowledge has always been related to ignorance or inaccurate subjective knowledge about an actual state of affairs. This idea has been extended to quantum mechanics through a completely incoherent interpretation of the Fermi-Dirac and Bose-Einstein statistics in terms of "strange" quantum particles. This interpretation, naturalized through a widespread "way of speaking" in the physics community, contradicts Born's physical account of Ψ as a "probability wave" which provides statistical information about outcomes that, in fact, cannot be interpreted in terms of 'ignorance about an actual state of affairs'. In the present paper we discuss how the metaphysics of actuality has played an essential role in limiting the possibilities of understanding things differently. We propose instead a metaphysical scheme in terms of immanent powers with definite potentia which allows us to consider quantum probability in a new light, namely, as providing objective knowledge about a potential state of affairs.

  7. Efforts to improve international migration statistics: a historical perspective.

    PubMed

    Kraly, E P; Gnanasekaran, K S

    1987-01-01

    During the past decade, the international statistical community has made several efforts to develop standards for the definition, collection and publication of statistics on international migration. This article surveys the history of official initiatives to standardize international migration statistics by reviewing the recommendations of the International Statistical Institute, International Labor Organization, and the UN, and reports a recently proposed agenda for moving toward comparability among national statistical systems. Heightening awareness of the benefits of exchange and creating motivation to implement international standards requires a 3-pronged effort from the international statistical community. 1st, it is essential to continue discussion about the significance of improvement, specifically standardization, of international migration statistics. The move from theory to practice in this area requires ongoing focus by migration statisticians so that conformity to international standards itself becomes a criterion by which national statistical practices are examined and assessed. 2nd, the countries should be provided with technical documentation to support and facilitate the implementation of the recommended statistical systems. Documentation should be developed with an understanding that conformity to international standards for migration and travel statistics must be achieved within existing national statistical programs. 3rd, the call for statistical research in this area requires more efforts by the community of migration statisticians, beginning with the mobilization of bilateral and multilateral resources to undertake the preceding list of activities.

  8. 75 FR 37245 - 2010 Standards for Delineating Metropolitan and Micropolitan Statistical Areas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-28

    ... Micropolitan Statistical Areas; Notice Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010... and Micropolitan Statistical Areas AGENCY: Office of Information and Regulatory Affairs, Office of... Statistical Areas. The 2010 standards replace and supersede the 2000 Standards for Defining Metropolitan and...

  9. Data precision of X-ray fluorescence (XRF) scanning of discrete samples with the ITRAX XRF core-scanner exemplified on loess-paleosol samples

    NASA Astrophysics Data System (ADS)

    Profe, Jörn; Ohlendorf, Christian

    2017-04-01

    XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and documents a new application of the XRF core-scanner technology at the same time. Reliable interpretations of XRF results require evaluating data precision for single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was established by measuring each sample ten times. Data precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics; the same elements show the lowest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s reveal mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
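
    The Poisson benchmark used above is easy to state: for counting data, a peak with mean N counts has a theoretical relative standard deviation of 1/sqrt(N). The sketch below compares the observed relative standard deviation of ten invented replicate measurements with that expectation.

      # Poisson expectation check: for counting data the theoretical relative
      # standard deviation of a peak with mean N counts is 1/sqrt(N).
      # The replicate counts below are invented for illustration.
      import numpy as np

      replicates = np.array([49820, 50210, 49675, 50090, 49930,
                             50345, 49760, 50120, 49880, 50005], dtype=float)

      observed_rsd = replicates.std(ddof=1) / replicates.mean()
      poisson_rsd = 1.0 / np.sqrt(replicates.mean())
      print(f"observed RSD = {observed_rsd:.4%}, Poisson RSD = {poisson_rsd:.4%}")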

  10. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement (statistical) dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range-based merit function ωm, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not by two linearly correlated quantities, which is the usual interpretation of these graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric curve effect.

  11. STRengthening analytical thinking for observational studies: the STRATOS initiative.

    PubMed

    Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James

    2014-12-30

    The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even 'standard' analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  12. Using volcano plots and regularized-chi statistics in genetic association studies.

    PubMed

    Li, Wentian; Freudenberg, Jan; Suh, Young Ju; Yang, Yaning

    2014-02-01

    Labor intensive experiments are typically required to identify the causal disease variants from a list of disease-associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds ratio (OR) and p-value, may rank variants in a different order. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, volcano plots are scatter plots of fold-change and t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines which correspond to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signal from variants with lower minor allele frequencies than the chi-square test statistic. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes. Copyright © 2013 Elsevier Ltd. All rights reserved.
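
    A volcano plot in the genetic-association sense is simply a scatter of effect size against evidence strength. The sketch below plots the log odds ratio against the chi statistic (square root of the Pearson chi-square) for simulated 2×2 tables; it does not implement the paper's regularized-chi filtering curve.

      # Volcano-plot sketch for association results: log odds ratio on the
      # x-axis, chi statistic (sqrt of the chi-square) on the y-axis.
      # The 2x2 allele-count tables are simulated.
      import numpy as np
      import matplotlib.pyplot as plt
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(3)
      log_or, chi_stat = [], []
      for _ in range(500):
          table = rng.integers(20, 200, size=(2, 2))
          chi2, _, _, _ = chi2_contingency(table, correction=False)
          odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
          log_or.append(np.log(odds_ratio))
          chi_stat.append(np.sqrt(chi2))

      plt.scatter(log_or, chi_stat, s=8)
      plt.xlabel("log odds ratio")
      plt.ylabel("chi statistic")
      plt.savefig("volcano.png")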

  13. Exploratory Visual Analysis of Statistical Results from Microarray Experiments Comparing High and Low Grade Glioma

    PubMed Central

    Reif, David M.; Israel, Mark A.; Moore, Jason H.

    2007-01-01

    The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occurring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. EVA is available at no charge to academic users and can be found at http://www.epistasis.org. PMID:19390666

  14. A Fair and Balanced Approach to the Mean

    ERIC Educational Resources Information Center

    Peters, Susan A.; Bennett, Victoria Miller; Young, Mandy; Watkins, Jonathan D.

    2016-01-01

    The mean can be interpreted as a fair-share value and as a balance point. Standards documents, including Common Core State Standards for Mathematics (CCSSM) (CCSSI 2010), suggest focusing on both interpretations. In this article, the authors propose a sequence of five activities to help students develop these understandings of the mean, and they…
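
    Both interpretations can be shown with a few lines of arithmetic: the fair-share value is the total shared out equally, and the balance-point property is that the signed deviations from the mean sum to zero. The numbers below are an arbitrary toy data set.

      # The mean as a fair share (total redistributed equally) and as a
      # balance point (signed deviations sum to zero), using toy data.
      values = [3, 5, 6, 8, 13]

      fair_share = sum(values) / len(values)          # 7.0: equal redistribution
      deviations = [v - fair_share for v in values]   # [-4, -2, -1, 1, 6]
      print(fair_share, sum(deviations))              # deviations balance to 0.0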

  15. 77 FR 33748 - International Conference on Harmonisation; Guidance on S2(R1) Genotoxicity Testing and Data...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-07

    ... standard genetic toxicology battery for prediction of potential human risks, and on interpreting results... followup testing and interpretation of positive results in vitro and in vivo in the standard genetic... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2008-D-0178...

  16. Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that create pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypothesis is known in NCPA, but not in ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182

  17. Crunching Numbers: What Cancer Screening Statistics Really Tell Us

    Cancer.gov

    Cancer screening studies have shown that more screening does not necessarily translate into fewer cancer deaths. This article explains how to interpret the statistics used to describe the results of screening studies.

  18. The Malpractice of Statistical Interpretation

    ERIC Educational Resources Information Center

    Fraas, John W.; Newman, Isadore

    1978-01-01

    Problems associated with the use of gain scores, analysis of covariance, multicollinearity, part and partial correlation, and the lack of rectilinearity in regression are discussed. Particular attention is paid to the misuse of statistical techniques. (JKS)

  19. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
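
    For readers unfamiliar with the classical side of the comparison, the sketch below implements the product-limit (Kaplan-Meier) estimate of a survival function on invented follow-up times; it is a generic illustration of the estimator, not the screening analysis performed in the paper:

        import numpy as np

        def kaplan_meier(times, events):
            """Product-limit estimate of the survival function.
            times  : follow-up time for each subject
            events : 1 if the event was observed, 0 if the subject was censored
            Returns (event_times, survival_probabilities)."""
            times = np.asarray(times, dtype=float)
            events = np.asarray(events, dtype=int)
            order = np.argsort(times)
            times, events = times[order], events[order]

            surv, s = [], 1.0
            event_times = np.unique(times[events == 1])
            for t in event_times:
                at_risk = np.sum(times >= t)                  # subjects still under observation at t
                d = np.sum((times == t) & (events == 1))      # events occurring at t
                s *= 1.0 - d / at_risk
                surv.append(s)
            return event_times, np.array(surv)

        # Hypothetical follow-up times (months): 1 = event observed, 0 = censored.
        t, s = kaplan_meier([6, 7, 10, 15, 19, 25], [1, 0, 1, 1, 0, 1])
        for ti, si in zip(t, s):
            print(f"S({ti:g}) = {si:.3f}")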

  20. The effect of sensor sheltering and averaging techniques on wind measurements at the Shuttle Landing Facility

    NASA Technical Reports Server (NTRS)

    Merceret, Francis J.

    1995-01-01

    This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one second intervals from 30-feet (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.
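
    The distinction between scalar and vector wind averages can be made concrete with a short sketch on synthetic one-second samples (the data, units, and conventions below are illustrative, not SLF measurements):

        import numpy as np

        # Synthetic 1-second wind samples: speed in knots, direction in degrees (direction the wind blows from).
        speed = np.array([8.0, 9.0, 7.5, 8.5, 9.5])
        direction_deg = np.array([170.0, 175.0, 172.0, 178.0, 174.0])

        # Scalar average: average the speeds directly.
        scalar_mean_speed = speed.mean()

        # Vector average: average the u/v components, then convert back to speed and direction.
        theta = np.deg2rad(direction_deg)
        u = -speed * np.sin(theta)   # meteorological convention: u eastward, v northward
        v = -speed * np.cos(theta)
        u_bar, v_bar = u.mean(), v.mean()
        vector_mean_speed = np.hypot(u_bar, v_bar)
        vector_mean_dir = np.rad2deg(np.arctan2(-u_bar, -v_bar)) % 360.0

        print(f"scalar mean speed = {scalar_mean_speed:.2f}")
        print(f"vector mean speed = {vector_mean_speed:.2f}, direction = {vector_mean_dir:.1f} deg")
        # In steady flow the two speeds nearly coincide; they diverge when direction varies widely.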

  1. Epidemiological cut-off values for Flavobacterium psychrophilum MIC data generated by a standard test protocol.

    PubMed

    Smith, P; Endris, R; Kronvall, G; Thomas, V; Verner-Jeffreys, D; Wilhelm, C; Dalsgaard, I

    2016-02-01

    Epidemiological cut-off values were developed for application to antibiotic susceptibility data for Flavobacterium psychrophilum generated by standard CLSI test protocols. The MIC values for ten antibiotic agents against Flavobacterium psychrophilum were determined in two laboratories. For five antibiotics, the data sets were of sufficient quality and quantity to allow the setting of valid epidemiological cut-off values. For these agents, the cut-off values, calculated by the application of the statistically based normalized resistance interpretation method, were ≤16 mg L(-1) for erythromycin, ≤2 mg L(-1) for florfenicol, ≤0.025 mg L(-1) for oxolinic acid (OXO), ≤0.125 mg L(-1) for oxytetracycline and ≤20 (1/19) mg L(-1) for trimethoprim/sulphamethoxazole. For ampicillin and amoxicillin, the majority of putative wild-type observations were 'off scale', and therefore, statistically valid cut-off values could not be calculated. For ormetoprim/sulphadimethoxine, the data were excessively diverse and a valid cut-off could not be determined. For flumequine, the putative wild-type data were extremely skewed, and for enrofloxacin, there was inadequate separation in the MIC values for putative wild-type and non-wild-type strains. It is argued that the adoption of OXO as a class representative for the quinolone group would be a valid method of determining susceptibilities to these agents. © 2014 John Wiley & Sons Ltd.

  2. Potential sources of variability in mesocosm experiments on the response of phytoplankton to ocean acidification

    NASA Astrophysics Data System (ADS)

    Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai

    2017-04-01

    Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be hindered by the high standard deviation typically found in the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.

  3. Introduction of a Journal Excerpt Activity Improves Undergraduate Students' Performance in Statistics

    ERIC Educational Resources Information Center

    Rabin, Laura A.; Nutter-Upham, Katherine E.

    2010-01-01

    We describe an active learning exercise intended to improve undergraduate students' understanding of statistics by grounding complex concepts within a meaningful, applied context. Students in a journal excerpt activity class read brief excerpts of statistical reporting from published research articles, answered factual and interpretive questions,…

  4. The Effect Size Statistic: Overview of Various Choices.

    ERIC Educational Resources Information Center

    Mahadevan, Lakshmi

    Over the years, methodologists have been recommending that researchers use magnitude of effect estimates in result interpretation to highlight the distinction between statistical and practical significance (cf. R. Kirk, 1996). A magnitude of effect statistic (i.e., effect size) tells to what degree the dependent variable can be controlled,…

  5. A Critique of Divorce Statistics and Their Interpretation.

    ERIC Educational Resources Information Center

    Crosby, John F.

    1980-01-01

    Increasingly, appeals to the divorce statistic are employed to substantiate claims that the family is in a state of breakdown and marriage is passe. This article contains a consideration of reasons why the divorce statistics are invalid and/or unreliable as indicators of the present state of marriage and family. (Author)

  6. Chi-Square Statistics, Tests of Hypothesis and Technology.

    ERIC Educational Resources Information Center

    Rochowicz, John A.

    The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
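
    A hedged example of the kind of technology-assisted calculation the paper describes, using the scipy library on an invented contingency table:

        from scipy.stats import chi2_contingency

        # Hypothetical 2x3 contingency table: rows = groups, columns = response categories.
        observed = [[18, 30, 12],
                    [26, 22, 16]]

        chi2, p_value, dof, expected = chi2_contingency(observed)
        print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
        # Compare p to the chosen significance level (e.g., 0.05) to decide on the null hypothesis.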

  7. SOCR: Statistics Online Computational Resource

    ERIC Educational Resources Information Center

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…

  8. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  9. The Role of the Sampling Distribution in Understanding Statistical Inference

    ERIC Educational Resources Information Center

    Lipson, Kay

    2003-01-01

    Many statistics educators believe that few students develop the level of conceptual understanding essential for them to apply correctly the statistical techniques at their disposal and to interpret their outcomes appropriately. It is also commonly believed that the sampling distribution plays an important role in developing this understanding.…

  10. ALISE Library and Information Science Education Statistical Report, 1999.

    ERIC Educational Resources Information Center

    Daniel, Evelyn H., Ed.; Saye, Jerry D., Ed.

    This volume is the twentieth annual statistical report on library and information science (LIS) education published by the Association for Library and Information Science Education (ALISE). Its purpose is to compile, analyze, interpret, and report statistical (and other descriptive) information about library/information science programs offered by…

  11. Does periodic lung screening of films meets standards?

    PubMed Central

    Binay, Songul; Arbak, Peri; Safak, Alp Alper; Balbay, Ege Gulec; Bilgin, Cahit; Karatas, Naciye

    2016-01-01

    Objective: To determine whether workers' periodic chest x-ray screening meets technical quality standards, a responsibility of physicians, and to evaluate differences in interpretation among physicians at different levels of training and the importance of standardized interpretation. Methods: Previously taken chest radiographs of 400 workers at a factory producing glass run channels were evaluated against technical and quality standards by three observers (a pulmonologist, a radiologist, and a pulmonologist assistant). There was perfect concordance between the radiologist and the pulmonologist for underpenetrated films, whereas there was perfect concordance between the pulmonologist and the pulmonologist assistant for overpenetrated films. Results: The pulmonologist (52%) rated the film dose as adequate more often than the other observers (radiologist, 44.3%; pulmonologist assistant, 30.4%). The pulmonologist (81.7%) judged films to have been taken in the inspiratory phase less often than the other observers (radiologist, 92.1%; pulmonologist assistant, 92.6%). The pulmonologist (53.5%) assessed patient positioning as symmetrical more often than the other observers (radiologist, 44.6%; pulmonologist assistant, 41.8%). The pulmonologist assistant (15.3%) reported parenchymal findings most frequently (radiologist, 2.2%; pulmonologist, 12.9%). Conclusion: Technical standards and exposure procedures need to be reorganized to improve the quality of chest radiographs, and reappraisal of all interpreters and continuous training of technicians are required. PMID:28083054

  12. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  13. Hematoxylin and Eosin Counterstaining Protocol for Immunohistochemistry Interpretation and Diagnosis.

    PubMed

    Grosset, Andrée-Anne; Loayza-Vega, Kevin; Adam-Granger, Éloïse; Birlea, Mirela; Gilks, Blake; Nguyen, Bich; Soucy, Geneviève; Tran-Thanh, Danh; Albadine, Roula; Trudel, Dominique

    2017-12-21

    Hematoxylin and eosin (H&E) staining is a well-established technique in histopathology. However, immunohistochemistry (IHC) interpretation is done exclusively with hematoxylin counterstaining. Our goal was to investigate the potential of H&E as a counterstain (H&E-IHC) to allow for visualization of a marker while confirming the diagnosis on the same slide. The quality of immunostaining and fast technical performance were the main criteria for selecting the final protocol. We stained multiple diagnostic tissues with class I IHC tests with different subcellular localization markers (anti-CK7, CK20, synaptophysin, CD20, HMB45, and Ki-67) and with double-staining on prostate tissues with anti-high molecular weight keratins/p63 (DAB detection) and p504s (alkaline phosphatase detection). To validate the efficacy of the counterstaining, we stained tissue microarrays from the Canadian Immunohistochemistry Quality Control (cIQc) with class II IHC tests (ER, PR, HER2, and p53 markers). Interobserver and intraobserver concordance was assessed by κ statistics. Excellent agreement of H&E-IHC interpretation was observed in comparison with standard IHC from our laboratory (κ, 0.87 to 1.00), and with the cIQc reference values (κ, 0.81 to 1.00). Interobserver and intraobserver agreement was excellent (κ, 0.89 to 1.00 and 0.87 to 1.00, respectively). We therefore show for the first time the potential of using H&E counterstaining for IHC interpretation. We recommend the H&E-IHC protocol to enhance diagnostic precision for the clinical workflow and research studies.
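
    For reference, interobserver agreement of the kind reported above is commonly summarized with an unweighted Cohen kappa; the sketch below is a generic implementation on invented ratings, not the study's actual scoring data:

        from collections import Counter

        def cohen_kappa(rater_a, rater_b):
            """Unweighted Cohen's kappa for two raters scoring the same cases."""
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            categories = set(rater_a) | set(rater_b)
            observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            expected_agreement = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
            return (observed_agreement - expected_agreement) / (1 - expected_agreement)

        # Hypothetical positive/negative calls on 10 slides by two observers.
        a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
        b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "neg"]
        print(f"kappa = {cohen_kappa(a, b):.2f}")   # 0.60 for these invented ratings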

  14. Do doctors need statistics? Doctors' use of and attitudes to probability and statistics.

    PubMed

    Swift, Louise; Miles, Susan; Price, Gill M; Shepstone, Lee; Leinster, Sam J

    2009-07-10

    There is little published evidence on what doctors do in their work that requires probability and statistics, yet the General Medical Council (GMC) requires new doctors to have these skills. This study investigated doctors' use of and attitudes to probability and statistics with a view to informing undergraduate teaching. An email questionnaire was sent to 473 clinicians with an affiliation to the University of East Anglia's Medical School. Of 130 respondents, approximately 90 per cent of doctors who performed each of the following activities found probability and statistics useful for that activity: accessing clinical guidelines and evidence summaries, explaining levels of risk to patients, assessing medical marketing and advertising material, interpreting the results of a screening test, reading research publications for general professional interest, and using research publications to explore non-standard treatment and management options. Seventy-nine per cent (103/130, 95 per cent CI 71 per cent, 86 per cent) of participants considered probability and statistics important in their work. Sixty-three per cent (78/124, 95 per cent CI 54 per cent, 71 per cent) said that there were activities that they could do better or start doing if they had an improved understanding of these areas and 74 of these participants elaborated on this. Themes highlighted by participants included: being better able to critically evaluate other people's research; becoming more research-active; having a better understanding of risk; and being better able to explain things to, or teach, other people. Our results can be used to inform how probability and statistics should be taught to medical undergraduates and should encourage today's medical students of the subjects' relevance to their future careers. Copyright 2009 John Wiley & Sons, Ltd.
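
    As a quick plausibility check of the reported proportions, a normal-approximation (Wald) interval for 103 of 130 participants reproduces the figures closely (the published interval may have been computed with an exact or Wilson method, so the lower bound differs by about one percentage point):

        from math import sqrt

        successes, n = 103, 130            # participants rating probability and statistics as important
        p_hat = successes / n              # about 0.79
        se = sqrt(p_hat * (1 - p_hat) / n)
        lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
        print(f"{p_hat:.0%} (95% CI {lower:.0%} to {upper:.0%})")   # ~79% (95% CI 72% to 86%)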

  15. ARBOOK: Development and Assessment of a Tool Based on Augmented Reality for Anatomy

    NASA Astrophysics Data System (ADS)

    Ferrer-Torregrosa, J.; Torralba, J.; Jimenez, M. A.; García, S.; Barcia, J. M.

    2015-02-01

    Technologies and new tools for educational purposes are evolving rapidly. This work presents the experience of a new tool based on augmented reality (AR) focusing on the anatomy of the lower limb. ARBOOK was constructed and developed based on TC and MRN images, dissections, and drawings. For the ARBOOK evaluation, a specific three-block questionnaire was developed and validated according to the Delphi method. The questionnaire covered motivation and attention, autonomous work, and three-dimensional interpretation tasks. A total of 211 students from 7 public and private Spanish universities were divided into two groups. The control group received standard teaching sessions supported by books and video. The ARBOOK group received the same standard sessions but additionally used the ARBOOK tool. At the end of the training, students completed a written test on lower-limb anatomy. Statistically significantly better scores for the ARBOOK group were found on the attention-motivation, autonomous work, and three-dimensional comprehension tasks. The ARBOOK group also scored significantly better on the written test. The results strongly suggest that the use of AR is suitable for anatomical teaching. Specifically, they indicate that this technology supports student motivation, autonomous work, and spatial interpretation. Such technologies deserve even greater consideration at a time when new technologies are naturally incorporated into everyday life.

  16. 16 CFR 1201.40 - Interpretation concerning bathtub and shower doors and enclosures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Interpretation concerning bathtub and shower... Policy and Interpretation § 1201.40 Interpretation concerning bathtub and shower doors and enclosures. (a... and enclosures” and “shower door and enclosure” as they are used in the Standard in subpart A. The...

  17. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  18. Standards and guidelines for HIV prevention research: considerations for local context in the interpretation of global ethical standards.

    PubMed

    Haire, Bridget G; Folayan, Morenike Oluwatoyin; Brown, Brandon

    2014-09-01

    While international standards are important for conducting clinical research, they may require interpretation in particular contexts. Standard of care in HIV prevention research has become complicated, given that there are now two new biomedical prevention interventions, 'treatment-as-prevention' and pre-exposure prophylaxis, in addition to barrier protection, counselling, male circumcision and treatment of sexually transmissible infections. Proper standards of care must be considered with regard to both normative guidance and the circumstances of the particular stakeholders: the community, trial population, researchers and sponsors. In addition, the special circumstances of the lives of participants need to be acknowledged in designing trial protocols and study procedures. When researchers are faced with the dilemma of interpretation of international ethics guidelines and the realities of the daily lives of persons and their practices, the decisions of the local ethics committee become crucial. The challenge then becomes how familiar ethics committee members in these local settings are with these guidelines, and how their interpretation and use in the local context ensures respect for persons and communities. It also includes justice and the fair selection of study participants without compromising data quality, and ensuring that the risks for study participants and their community do not outweigh the potential benefits.

  19. Data exploration systems for databases

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.; Hield, Christopher

    1992-01-01

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying data exploration systems should be a better understanding of the structure of the data and a perspective of the data enabling an analyst to form hypotheses for interpreting the data. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.

  20. The need for performance criteria in evaluating the durability of wood products

    Treesearch

    Stan Lebow; Bessie Woodward; Patricia Lebow; Carol Clausen

    2010-01-01

    Data generated from wood-product durability evaluations can be difficult to interpret. Standard methods used to evaluate the potential long-term durability of wood products often provide little guidance on interpretation of test results. Decisions on acceptable performance for standardization and code compliance are based on the judgment of reviewers or committees....

  1. Assessing Resilience in Students Who Are Deaf or Blind: Supplementing Standardized Achievement Testing

    ERIC Educational Resources Information Center

    Butler, Michelle A.; Katayama, Andrew D.; Schindling, Casey; Dials, Katherine

    2018-01-01

    Although testing accommodations for standardized assessments are available for students with disabilities, interpretation remains challenging. The authors explored resilience to see if it could contribute to the interpretation of academic success for students who are deaf or hard of hearing or blind or have low vision. High school students (30…

  2. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.

  3. Evaluating Structural Equation Models for Categorical Outcomes: A New Test Statistic and a Practical Challenge of Interpretation.

    PubMed

    Monroe, Scott; Cai, Li

    2015-01-01

    This research is concerned with two topics in assessing model fit for categorical data analysis. The first topic involves the application of a limited-information overall test, introduced in the item response theory literature, to structural equation modeling (SEM) of categorical outcome variables. Most popular SEM test statistics assess how well the model reproduces estimated polychoric correlations. In contrast, limited-information test statistics assess how well the underlying categorical data are reproduced. Here, the recently introduced C2 statistic of Cai and Monroe (2014) is applied. The second topic concerns how the root mean square error of approximation (RMSEA) fit index can be affected by the number of categories in the outcome variable. This relationship creates challenges for interpreting RMSEA. While the two topics initially appear unrelated, they may conveniently be studied in tandem since RMSEA is based on an overall test statistic, such as C2. The results are illustrated with an empirical application to data from a large-scale educational survey.
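
    The dependence of RMSEA on an overall test statistic can be made explicit with the usual point-estimate formula (one common convention, using n - 1 in the denominator; the statistic values below are invented):

        from math import sqrt

        def rmsea(stat, df, n):
            """Point estimate of RMSEA from an overall fit statistic (e.g., chi-square or C2),
            its degrees of freedom, and the sample size."""
            return sqrt(max(stat - df, 0) / (df * (n - 1)))

        # Hypothetical fit results for the same model under two different outcome categorizations.
        print(round(rmsea(stat=180.0, df=60, n=1000), 3))   # ~0.045
        print(round(rmsea(stat=150.0, df=60, n=1000), 3))   # ~0.039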

  4. Assessing the suitability of fractional polynomial methods in health services research: a perspective on the categorization epidemic.

    PubMed

    Williams, Jennifer Stewart

    2011-07-01

    To show how fractional polynomial methods can usefully replace the practice of arbitrarily categorizing data in epidemiology and health services research. A health service setting is used to illustrate a structured and transparent way of representing non-linear data without arbitrary grouping. When age is a regressor its effects on an outcome will be interpreted differently depending upon the placing of cutpoints or the use of a polynomial transformation. Although it is common practice, categorization comes at a cost. Information is lost, and accuracy and statistical power reduced, leading to spurious statistical interpretation of the data. The fractional polynomial method is widely supported by statistical software programs, and deserves greater attention and use.
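
    The transformation family behind fractional polynomials can be sketched briefly; the snippet below builds a degree-2 fractional-polynomial design matrix over the conventional power set, which is only the building block, not the closed-test power-selection procedure used in practice:

        import numpy as np

        POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional fractional-polynomial power set

        def fp_term(x, p):
            """Single fractional-polynomial term; power 0 is interpreted as log(x)."""
            x = np.asarray(x, dtype=float)
            return np.log(x) if p == 0 else x ** p

        def fp2_design(x, p1, p2):
            """Design matrix for a degree-2 fractional polynomial (intercept + two terms).
            Repeated powers (p1 == p2) use the conventional x**p and x**p * log(x) pair."""
            x = np.asarray(x, dtype=float)
            t1 = fp_term(x, p1)
            t2 = fp_term(x, p2) * (np.log(x) if p1 == p2 else 1.0)
            return np.column_stack([np.ones_like(t1), t1, t2])

        age = np.array([25, 32, 40, 47, 55, 63, 70, 78], dtype=float)
        X = fp2_design(age, 0.5, -1)   # e.g. sqrt(age) and 1/age instead of arbitrary age bands
        print(X.round(3))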

  5. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    PubMed

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  6. Bayesian theories of conditioning in a changing world.

    PubMed

    Courville, Aaron C; Daw, Nathaniel D; Touretzky, David S

    2006-07-01

    The recent flowering of Bayesian approaches invites the re-examination of classic issues in behavior, even in areas as venerable as Pavlovian conditioning. A statistical account can offer a new, principled interpretation of behavior, and previous experiments and theories can inform many unexplored aspects of the Bayesian enterprise. Here we consider one such issue: the finding that surprising events provoke animals to learn faster. We suggest that, in a statistical account of conditioning, surprise signals change and therefore uncertainty and the need for new learning. We discuss inference in a world that changes and show how experimental results involving surprise can be interpreted from this perspective, and also how, thus understood, these phenomena help constrain statistical theories of animal and human learning.

  7. Forest statistics of western Kentucky

    Treesearch

    The Forest Survey Organization Central States Forest Experiment Station

    1950-01-01

    This Survey Release presents the more significant preliminary statistics on the forest area and timber volume for the western region of Kentucky. Similar reports for the remainder of the state will be published as soon as statistical tabulations are completed. Later, an analytical report for the state will be published which will interpret forest area, timber volume,...

  8. Forest statistics of southern Indiana

    Treesearch

    The Forest Survey Organization Central States Forest Experiment Station

    1951-01-01

    This Survey Release presents the more significant preliminary statistics on the forest area and timber volume for each of the three regions of southern Indiana. A similar report will be published for the two northern Indiana regions. Later, an analytical report for the state will be published which will interpret statistics on forest area, timber- volume, growth, and...

  9. Simulation Study of Evacuation Control Center Operations Analysis

    DTIC Science & Technology

    2011-06-01

    Only front-matter fragments of this report were captured in the record: table-of-contents entries for baseline manning (Runs 1, 2, and 3), baseline statistics interpretation, a key statistic matrix for Runs 1-12, and paired t-test comparisons of evacuation control center (ECC) completion times between runs (e.g., Run 5 versus Run 6 and Run 3 versus Run 9).

  10. Issues affecting the interpretation of eastern hardwood resource statistics

    Treesearch

    William G. Luppold; William H. McWilliams

    2000-01-01

    Forest inventory statistics developed by the USDA Forest Service are used by customers ranging from forest industry to state and local economic development groups. In recent years, these statistics have been used increasingly to justify greater utilization of the eastem hardwood resource or to evaluate the sustainability of expanding demand for hardwood roundwood and...

  11. DOE interpretations Guide to OSH standards. Update to the Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-31

    Reflecting Secretary O'Leary's focus on occupational safety and health, the Office of Occupational Safety is pleased to provide you with the latest update to the DOE Interpretations Guide to OSH Standards. This Guide was developed in cooperation with the Occupational Safety and Health Administration, which continued its support during this last revision by facilitating access to the interpretations found on the OSHA Computerized Information System (OCIS). This March 31, 1994 update contains 123 formal interpretation letters written by OSHA. As a result of the unique requests received by the 1-800 Response Line, this update also contains 38 interpretations developed by DOE. This new occupational safety and health information adds still more important guidance to the four volume reference set that you presently have in your possession.

  12. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality

    PubMed Central

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M

    2008-01-01

    Background: Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results: We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion: The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method: We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163

  13. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality.

    PubMed

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M

    2008-11-07

    Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit.
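
    As a conceptual sketch of two quantities mentioned in these abstracts (not the authors' Visual Inquiry Toolkit implementation), the following computes a Standardized Mortality Ratio per county and a simple reliability score, defined here as the fraction of parameterized scans in which a county falls inside a significant cluster; all counts and run results are invented:

        import numpy as np

        # Hypothetical county-level inputs.
        observed_deaths = np.array([12, 30, 7, 55])
        expected_deaths = np.array([10.1, 24.0, 9.5, 40.2])   # e.g., from age-standardized reference rates

        smr = observed_deaths / expected_deaths               # SMR > 1: more deaths than expected
        print("SMR:", smr.round(2))

        # Cluster membership of each county across 5 scans run with different scaling parameters
        # (1 = the county fell inside a statistically significant cluster in that run).
        membership = np.array([
            [1, 1, 1, 1, 1],   # county 0: flagged in every run -> stable
            [1, 0, 1, 0, 0],   # county 1: flagged in some runs -> unstable
            [0, 0, 0, 0, 0],
            [1, 1, 1, 1, 0],
        ])
        reliability = membership.mean(axis=1)                 # fraction of runs flagging each county
        print("reliability:", reliability)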

  14. Variability in hematology of white-spotted bamboo sharks (Chiloscyllium plagiosum) in different living environments.

    PubMed

    Parkinson, Lily A; Alexander, Amy B; Campbell, Terry W

    2017-07-01

    Elasmobranch hematology continues to reveal new peculiarities within this specialized field. This report compares total hematologic values from the same white-spotted bamboo sharks (Chiloscyllium plagiosum) housed in different environments. We compared the hemograms one year apart, using a standardized Natt-Herrick's technique. The total white blood cell (WBC) counts of the sharks were statistically different between the two time points (initial median total WBC count = 18,920 leukocytes/μl, SD = 8,108; 1 year later total WBC count = 1,815 leukocytes/μl, SD = 1,309). The packed cell volumes were additionally found to be statistically different (19%, SD = 2.9 vs. 22%, SD = 2.0). Analysis revealed the only differences between the time points were the temperature and stocking densities at which these sharks were housed. This report emphasizes the need for a thorough understanding of the husbandry of an elasmobranch prior to interpretation of a hemogram and suggests that reference intervals should be created for each environment. © 2017 Wiley Periodicals, Inc.

  15. Note onset deviations as musical piece signatures.

    PubMed

    Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis

    2013-01-01

    A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.

  16. Australian Curriculum Linked Lessons: Statistics

    ERIC Educational Resources Information Center

    Day, Lorraine

    2014-01-01

    Students recognise and analyse data and draw inferences. They represent, summarise and interpret data and undertake purposeful investigations involving the collection and interpretation of data… They develop an increasingly sophisticated ability to critically evaluate chance and data concepts and make reasoned judgments and decisions, as well as…

  17. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    PubMed

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
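
    A minimal ordinary-least-squares example of the kind of model the article discusses, with invented data and predictors; the coefficient on each predictor is read as the expected change in the outcome per unit change in that predictor, holding the others fixed:

        import numpy as np

        # Hypothetical data: systolic blood pressure modeled from age and body-mass index.
        rng = np.random.default_rng(0)
        n = 50
        age = rng.uniform(30, 70, n)
        bmi = rng.uniform(20, 35, n)
        sbp = 90 + 0.6 * age + 1.2 * bmi + rng.normal(0, 8, n)

        X = np.column_stack([np.ones(n), age, bmi])           # intercept + two predictors
        coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)
        intercept, b_age, b_bmi = coef
        print(f"fitted: SBP = {intercept:.1f} + {b_age:.2f}*age + {b_bmi:.2f}*BMI")
        # b_age is the expected change in SBP per additional year of age, with BMI held fixed.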

  18. Using the U.S. Geological Survey National Water Quality Laboratory LT-MDL to Evaluate and Analyze Data

    USGS Publications Warehouse

    Bonn, Bernadine A.

    2008-01-01

    A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples are included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation. These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.

  19. Measuring Primary Students' Graph Interpretation Skills Via a Performance Assessment: A case study in instrument development

    NASA Astrophysics Data System (ADS)

    Peterman, Karen; Cranston, Kayla A.; Pryor, Marie; Kermish-Allen, Ruth

    2015-11-01

    This case study was conducted within the context of a place-based education project that was implemented with primary school students in the USA. The authors and participating teachers created a performance assessment of standards-aligned tasks to examine 6-10-year-old students' graph interpretation skills as part of an exploratory research project. Fifty-five students participated in a performance assessment interview at the beginning and end of a place-based investigation. Two forms of the assessment were created and counterbalanced within class at pre and post. In situ scoring was conducted such that responses were scored as correct versus incorrect during the assessment's administration. Criterion validity analysis demonstrated an age-level progression in student scores. Tests of discriminant validity showed that the instrument detected variability in interpretation skills across each of three graph types (line, bar, dot plot). Convergent validity was established by correlating in situ scores with those from the Graph Interpretation Scoring Rubric. Students' proficiency with interpreting different types of graphs matched expectations based on age and the standards-based progression of graphs across primary school grades. The assessment tasks were also effective at detecting pre-post gains in students' interpretation of line graphs and dot plots after the place-based project. The results of the case study are discussed in relation to the common challenges associated with performance assessment. Implications are presented in relation to the need for authentic and performance-based instructional and assessment tasks to respond to the Common Core State Standards and the Next Generation Science Standards.

  20. Simulation of target interpretation based on infrared image features and psychology principle

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping

    2009-07-01

    Target feature extraction and identification is an important and complicated step in target interpretation: it directly affects the psychosensory response of the interpreter to the target infrared image and ultimately determines target viability. Using statistical decision theory and psychological principles, and drawing on four psychophysical experiments, an interpretation model for infrared targets is established. The model estimates the target detection probability by calculating the degree of similarity of four features between the target region and the background region delineated on the infrared image. Verified against a large number of practical target interpretations, the model can effectively simulate the target interpretation and detection process and yield objective interpretation results, providing technical support for target extraction, identification, and decision-making.

  1. Agreement Analysis: What He Said, She Said Versus You Said.

    PubMed

    Vetter, Thomas R; Schober, Patrick

    2018-06-01

    Correlation and agreement are 2 concepts that are widely applied in the medical literature and clinical practice to assess for the presence and strength of an association. However, because correlation and agreement are conceptually distinct, they require the use of different statistics. Agreement is a concept that is closely related to but fundamentally different from and often confused with correlation. The idea of agreement refers to the notion of reproducibility of clinical evaluations or biomedical measurements. The intraclass correlation coefficient is a commonly applied measure of agreement for continuous data. The intraclass correlation coefficient can be validly applied specifically to assess intrarater reliability and interrater reliability. As its name implies, the Lin concordance correlation coefficient is another measure of agreement or concordance. In undertaking a comparison of a new measurement technique with an established one, it is necessary to determine whether they agree sufficiently for the new to replace the old. Bland and Altman demonstrated that using a correlation coefficient is not appropriate for assessing the interchangeability of 2 such measurement methods. They in turn described an alternative approach, the since widely applied graphical Bland-Altman Plot, which is based on a simple estimation of the mean and standard deviation of differences between measurements by the 2 methods. In reading a medical journal article that includes the interpretation of diagnostic tests and application of diagnostic criteria, attention is conventionally focused on aspects like sensitivity, specificity, predictive values, and likelihood ratios. However, if the clinicians who interpret the test cannot agree on its interpretation and resulting typically dichotomous or binary diagnosis, the test results will be of little practical use. Such agreement between observers (interobserver agreement) about a dichotomous or binary variable is often reported as the kappa statistic. Assessing the interrater agreement between observers, in the case of ordinal variables and data, also has important biomedical applicability. Typically, this situation calls for use of the Cohen weighted kappa. Questionnaires, psychometric scales, and diagnostic tests are widespread and increasingly used by not only researchers but also clinicians in their daily practice. It is essential that these questionnaires, scales, and diagnostic tests have a high degree of agreement between observers. It is therefore vital that biomedical researchers and clinicians apply the appropriate statistical measures of agreement to assess the reproducibility and quality of these measurement instruments and decision-making processes.
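
    A compact sketch of the Bland-Altman calculation described above, on invented paired measurements: the bias is the mean difference between methods and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of the differences:

        import numpy as np

        # Hypothetical paired measurements of the same quantity by an established and a new method.
        method_a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.2, 9.5, 10.7])
        method_b = np.array([10.5, 11.2, 10.1, 12.6, 11.3, 11.0, 9.9, 10.6])

        diff = method_b - method_a
        bias = diff.mean()                       # mean difference between the two methods
        sd = diff.std(ddof=1)                    # sample standard deviation of the differences
        loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

        print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa_low:.2f}, {loa_high:.2f})")
        # A Bland-Altman plot graphs diff against the pairwise means (method_a + method_b) / 2,
        # with horizontal lines drawn at the bias and at the two limits of agreement.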

  2. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    PubMed Central

    Cunningham, Michael R.; Baumeister, Roy F.

    2016-01-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect – contrary to their title. PMID:27826272

  3. Application of ISO 9000 Standards to Education and Training. Interpretation and Guidelines in a European Perspective. CEDEFOP Document.

    ERIC Educational Resources Information Center

    Van den Berghe, Wouter

    This report brings together European experience on the interpretation and implementation of ISO 9000 in education and training (ET) environments. Chapter 1 discusses the importance of quality concepts in ET and summarizes key concepts of total quality management (TQM) and its relevance for ET. Chapter 2 introduces the ISO 9000 standards. It…

  4. The Interpretation of "in Context" Verbal Probability Expressions Used in International Accounting Standards: A Comparison of English and Chinese Students Studying at English Speaking Universities

    ERIC Educational Resources Information Center

    Salleh, Safrul Izani Mohd; Gardner, John C.; Sulong, Zunaidah; McGowan, Carl B., Jr.

    2011-01-01

    This study examines the differences in the interpretation of ten "in context" verbal probability expressions used in accounting standards between native Chinese speaking and native English speaking accounting students in United Kingdom universities. The study assesses the degree of grouping factors consensus on the numerical…

  5. High-testosterone men reject low ultimatum game offers.

    PubMed

    Burnham, Terence C

    2007-09-22

    The ultimatum game is a simple negotiation with the interesting property that people frequently reject offers of 'free' money. These rejections contradict the standard view of economic rationality. This divergence between economic theory and human behaviour is important and has no broadly accepted cause. This study examines the relationship between ultimatum game rejections and testosterone. In a variety of species, testosterone is associated with male seeking dominance. If low ultimatum game offers are interpreted as challenges, then high-testosterone men may be more likely to reject such offers. In this experiment, men who reject low offers ($5 out of $40) have significantly higher testosterone levels than those who accept. In addition, high testosterone levels are associated with higher ultimatum game offers, but this second finding is not statistically significant.

  6. Diphoton searches (CMS)

    NASA Astrophysics Data System (ADS)

    Quittnat, Milena

    2017-12-01

    The search for high-mass resonances decaying into two photons is well motivated by many physics scenarios beyond the standard model. This note summarizes the results of this search in proton-proton collisions with a center-of-mass energy of √s = 13 TeV and an integrated luminosity of 3.3 fb⁻¹ by the CMS experiment. It presents the interpretation of the results under spin-0 and spin-2 hypotheses with a relative width up to 5.6 × 10⁻² in a mass region of 500-4500 GeV. The results of the √s = 13 TeV analysis are combined statistically with previous searches performed by the CMS collaboration, employing a center-of-mass energy of √s = 8 TeV and an integrated luminosity of 19.7 fb⁻¹.

  7. Standard Nutrient Agar 1 as a substitute for blood-supplemented Müller-Hinton agar for antibiograms in developing countries.

    PubMed

    Niederstebruch, N; Sixt, D

    2013-02-01

    In the industrial world, the agar diffusion test is a standard procedure for the susceptibility testing of bacterial isolates. Beta-hemolytic Streptococcus spp. are tested with Müller-Hinton agar supplemented with 5 % blood, a so-called blood agar. The results are interpreted using standardized tables, which only exist for this type of nutrient matrix. Because of a number of difficulties, both with respect to technical issues and to manual skills, blood agar is not a feasible option in many developing countries. Beta-hemolytic Streptococcus spp. also grow on Standard Nutrient Agar 1 (StNA1). This suggests using that type of nutrient medium for running agar diffusion tests. However, there are no standardized tables that can be used for interpreting the diameters of the zones of inhibition on StNA1. Using the existing standardized tables for blood agar to interpret cultures on StNA1 would be of great benefit under such circumstances where blood agar is not available. With this in mind, we conducted comparative tests to evaluate the growth characteristics of beta-hemolytic Streptococcus spp. on StNA1 compared to Müller-Hinton agar supplemented with 5 % sheep blood. In this study, we were able to show that beta-hemolytic Streptococcus spp. develop similar zones of inhibition on blood agar and on StNA1. Therefore, it is suggested that, for the interpretation of antibiograms of beta-hemolytic Streptococcus spp. performed on StNA1, the standard tables for blood agar can be used.

  8. Quantifying economic fluctuations by adapting methods of statistical physics

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki

    2001-09-01

    The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system, so this framework is of potential value if applied to economic systems. This thesis compares the statistics of the cross-correlation matrix C, whose elements C_ij are the correlation coefficients of price fluctuations of stocks i and j, against the "null hypothesis" of a random matrix having the same symmetry properties. It is shown that comparison of the eigenvalue statistics of C with RMT results can be used to distinguish random and non-random parts of C. The non-random part of C, which deviates from RMT results, provides information regarding genuine cross-correlations between stocks. The interpretations and potential practical utility of these deviations are also investigated. The second focus is the characterization of the dynamics of stock price fluctuations. The statistical properties of the changes G_Δt in price over a time interval Δt are quantified, and the statistical relation between G_Δt and the trading activity, measured by the number of transactions N_Δt in the interval Δt, is investigated. The statistical properties of the volatility, i.e., the time-dependent standard deviation of price fluctuations, are related to two microscopic quantities: N_Δt and the variance W²_Δt of the price changes for all transactions in the interval Δt. In addition, the statistical relationship between G_Δt and the number of shares Q_Δt traded in Δt is investigated.
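
    A minimal sketch of the random-matrix comparison described above, run on purely synthetic return series: it forms the cross-correlation matrix C and checks its eigenvalues against the Marchenko-Pastur bounds expected for a random matrix. For real market data, eigenvalues escaping this band would carry the genuine cross-correlations discussed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_times = 100, 500                     # N assets, T observations (synthetic)
returns = rng.standard_normal((n_stocks, n_times))

# Normalize each series to zero mean and unit variance, then form C_ij.
z = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
C = z @ z.T / n_times

eigenvalues = np.linalg.eigvalsh(C)

# Marchenko-Pastur bounds for a random correlation matrix with Q = T/N.
q = n_times / n_stocks
lam_min = (1 - np.sqrt(1 / q)) ** 2
lam_max = (1 + np.sqrt(1 / q)) ** 2

outside = eigenvalues[(eigenvalues < lam_min) | (eigenvalues > lam_max)]
print(f"RMT band: [{lam_min:.2f}, {lam_max:.2f}]")
print(f"{outside.size} of {eigenvalues.size} eigenvalues deviate from the RMT band")
```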

  9. Statistics Poster Challenge for Schools

    ERIC Educational Resources Information Center

    Payne, Brad; Freeman, Jenny; Stillman, Eleanor

    2013-01-01

    The analysis and interpretation of data are important life skills. A poster challenge for schoolchildren provides an innovative outlet for these skills and demonstrates their relevance to daily life. We discuss our Statistics Poster Challenge and the lessons we have learned.

  10. Noise Reduction in High-Throughput Gene Perturbation Screens

    USDA-ARS?s Scientific Manuscript database

    Motivation: Accurate interpretation of perturbation screens is essential for a successful functional investigation. However, the screened phenotypes are often distorted by noise, and their analysis requires specialized statistical analysis tools. The number and scope of statistical methods available...

  11. Digital recovery, modification, and analysis of Tetra Tech seismic horizon mapping, National Petroleum Reserve Alaska (NPRA), northern Alaska

    USGS Publications Warehouse

    Saltus, R.W.; Kulander, Christopher S.; Potter, Christopher J.

    2002-01-01

    We have digitized, modified, and analyzed seismic interpretation maps of 12 subsurface stratigraphic horizons spanning portions of the National Petroleum Reserve in Alaska (NPRA). These original maps were prepared by Tetra Tech, Inc., based on about 15,000 miles of seismic data collected from 1974 to 1981. We have also digitized interpreted faults and seismic velocities from Tetra Tech maps. The seismic surfaces were digitized as two-way travel time horizons and converted to depth using Tetra Tech seismic velocities. The depth surfaces were then modified by long-wavelength corrections based on recent USGS seismic re-interpretation along regional seismic lines. We have developed and executed an algorithm to identify and calculate statistics on the area, volume, height, and depth of closed structures based on these seismic horizons. These closure statistics are tabulated and have been used as input to oil and gas assessment calculations for the region. Directories accompanying this report contain basic digitized data, processed data, maps, tabulations of closure statistics, and software relating to this project.

  12. O-Acetyl Side-Chains in Monosaccharides: Redundant NMR Spin-Couplings and Statistical Models for Acetate Ester Conformational Analysis.

    PubMed

    Turney, Toby; Pan, Qingfeng; Sernau, Luke; Carmichael, Ian; Zhang, Wenhui; Wang, Xiaocong; Woods, Robert J; Serianni, Anthony S

    2017-01-12

    α- and β-D-glucopyranose monoacetates 1-3 were prepared with selective ¹³C enrichment in the O-acetyl side-chain, and ensembles of ¹³C-¹H and ¹³C-¹³C NMR spin-couplings (J-couplings) were measured involving the labeled carbons. Density functional theory (DFT) was applied to a set of model structures to determine which J-couplings are sensitive to rotation of the ester bond θ. Eight J-couplings (¹JCC, ²JCH, ²JCC, ³JCH, and ³JCC) were found to be sensitive to θ, and four equations were parametrized to allow quantitative interpretations of experimental J-values. Inspection of J-coupling ensembles in 1-3 showed that O-acetyl side-chain conformation depends on molecular context, with flanking groups playing a dominant role in determining the properties of θ in solution. To quantify these effects, ensembles of J-couplings containing four values were used to determine the precision and accuracy of several 2-parameter statistical models of rotamer distributions across θ in 1-3. The statistical method used to generate these models has been encoded in a newly developed program, MA'AT, which is available for public use. These models were compared to O-acetyl side-chain behavior observed in a representative sample of crystal structures, and in molecular dynamics (MD) simulations of O-acetylated model structures. While the functional form of the model had little effect on the precision of the calculated mean of θ in 1-3, platykurtic models were found to give more precise estimates of the width of the distribution about the mean (expressed as circular standard deviations). Validation of these 2-parameter models to interpret ensembles of redundant J-couplings using the O-acetyl system as a test case enables future extension of the approach to other flexible elements in saccharides, such as glycosidic linkage conformation.

  13. ADHD Rating Scale-IV: Checklists, Norms, and Clinical Interpretation

    ERIC Educational Resources Information Center

    Pappas, Danielle

    2006-01-01

    This article reviews the "ADHD Rating Scale-IV: Checklist, norms, and clinical interpretation," is a norm-referenced checklist that measures the symptoms of attention deficit/hyperactivity disorder (ADHD) according to the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric…

  14. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.

    2017-01-01

    Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation--and possibly more importantly, the replication--of results are…

  15. Critical Views of 8th Grade Students toward Statistical Data in Newspaper Articles: Analysis in Light of Statistical Literacy

    ERIC Educational Resources Information Center

    Guler, Mustafa; Gursoy, Kadir; Guven, Bulent

    2016-01-01

    Understanding and interpreting biased data, decision-making in accordance with the data, and critically evaluating situations involving data are among the fundamental skills necessary in the modern world. To develop these required skills, emphasis on statistical literacy in school mathematics has been gradually increased in recent years. The…

  16. The Impact of Language Experience on Language and Reading: A Statistical Learning Approach

    ERIC Educational Resources Information Center

    Seidenberg, Mark S.; MacDonald, Maryellen C.

    2018-01-01

    This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…

  17. Computational Complexity of Bosons in Linear Networks

    DTIC Science & Technology

    2017-03-01

    photon statistics while strongly reducing emission probabilities: thus leading experimental teams pursuing large-scale BOSONSAMPLING have faced a hard...Potentially, this could motivate new validation protocols exploiting statistics that include this temporal degree of freedom. The impact of...photon- statistics polluted by higher-order terms, which can be mistakenly interpreted as decreased photon-indistinguishability. In fact, in many cases

  18. Working Around Cosmic Variance: Remote Quadrupole Measurements of the CMB

    NASA Astrophysics Data System (ADS)

    Adil, Arsalan; Bunn, Emory

    2018-01-01

    Anisotropies in the CMB maps continue to revolutionize our understanding of the Cosmos. However, the statistical interpretation of these anisotropies is tainted with a posteriori statistics. The problem is particularly pronounced for lower-order multipoles, i.e. in the cosmic variance regime of the power spectrum. Naturally, the solution lies in acquiring a new data set – a rather difficult task given the sample size of the Universe. The CMB temperature, in theory, depends on: the direction of photon propagation, the time at which the photons are observed, and the observer’s location in space. In existing CMB data, only the first parameter varies. However, as first pointed out by Kamionkowski and Loeb, a solution lies in making the so-called “Remote Quadrupole Measurements” by analyzing the secondary polarization produced by incoming CMB photons via the Sunyaev-Zel’dovich (SZ) effect. These observations allow us to measure the projected CMB quadrupole at the location and look-back time of a galaxy cluster. At low redshifts, the remote quadrupole is strongly correlated to the CMB anisotropy from our last scattering surface. We provide here a formalism for computing the covariance and relation matrices for both the two-point correlation function on the last scattering surface of a galaxy cluster and the cross correlation of the remote quadrupole with the local CMB. We then calculate these matrices based on a fiducial model and a non-standard model that suppresses power at large angles for ~10⁴ clusters up to z=2. We anticipate making a priori predictions of the differences between our expectations for the standard and non-standard models. Such an analysis is timely in the wake of the CMB S4 era which will provide us with an extensive SZ cluster catalogue.

  19. Quantitative Seismic Interpretation: Applying Rock Physics Tools to Reduce Interpretation Risk

    NASA Astrophysics Data System (ADS)

    Sondergeld, Carl H.

    This book is divided into seven chapters that cover rock physics, statistical rock physics, seismic inversion techniques, case studies, and work flows. On balance, the emphasis is on rock physics. Included are 56 color figures that greatly help in the interpretation of more complicated plots and displays.The domain of rock physics falls between petrophysics and seismics. It is the basis for interpreting seismic observations and therefore is pivotal to the understanding of this book. The first two chapters are dedicated to this topic (109 pages).

  20. The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework

    NASA Astrophysics Data System (ADS)

    Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.

    2016-12-01

    The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new, and there has been a significant effort within the geospatial community over many years to develop nested gridding standards to tackle these issues. These efforts have recently culminated in a Discrete Global Grid Systems (DGGS) standard developed under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed, areal-based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell is the principal aspect of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During the 6th Session of the UN-GGIM in August 2016 the role of DGGS in the context of the GSGF was formally acknowledged. This paper proposes to highlight the synergies and role of DGGS in the Global Statistical Geospatial Framework and to show examples of the use of DGGS to combine geospatial statistics with traditional geoscientific data.

  1. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
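
    As a hedged illustration of the Bland-Altman approach mentioned above (not the authors' code), the sketch below computes the bias and 95% limits of agreement for paired quantitative outputs from two assays; the paired values are placeholders.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return bias and 95% limits of agreement for two paired measurement sets."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Placeholder paired values (e.g., variant allele fractions from two assays).
assay_new = [0.12, 0.25, 0.40, 0.51, 0.33, 0.08, 0.47]
assay_ref = [0.10, 0.27, 0.38, 0.55, 0.30, 0.09, 0.45]

bias, (lo, hi) = bland_altman(assay_new, assay_ref)
print(f"bias = {bias:.3f}, 95% limits of agreement = ({lo:.3f}, {hi:.3f})")
```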

  2. What is too much variation? The null hypothesis in small-area analysis.

    PubMed Central

    Diehr, P; Cain, K; Connell, F; Volinn, E

    1990-01-01

    A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results. PMID:2312306

  3. What is too much variation? The null hypothesis in small-area analysis.

    PubMed

    Diehr, P; Cain, K; Connell, F; Volinn, E

    1990-02-01

    A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results.
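
    A minimal re-creation of the kind of null-hypothesis simulation described above (not the authors' original program): every small area draws events from one common underlying rate, and the extremal rate ratio and chi-square statistic are tallied across replicates. The populations and the common rate are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
populations = np.array([5_000, 12_000, 30_000, 45_000, 80_000, 150_000])  # hypothetical counties
true_rate = 2.0 / 1000          # common surgery rate under the null hypothesis
n_sims = 10_000

ratios, chi2 = [], []
for _ in range(n_sims):
    counts = rng.poisson(true_rate * populations)
    rates = counts / populations
    if rates.min() > 0:
        ratios.append(rates.max() / rates.min())   # "largest versus smallest" comparison
    expected = true_rate * populations
    chi2.append(np.sum((counts - expected) ** 2 / expected))

print(f"median extremal rate ratio under the null: {np.median(ratios):.1f}")
print(f"95th percentile of the chi-square statistic: {np.percentile(chi2, 95):.1f}")
```

    Even with identical underlying rates, the largest-to-smallest ratio can be surprisingly large, which is the caution the paper raises about interpreting small-area comparisons.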

  4. Quantification of heterogeneity observed in medical images.

    PubMed

    Brooks, Frank J; Grigsby, Perry W

    2013-03-02

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.

  5. A review of mammalian carcinogenicity study design and potential effects of alternate test procedures on the safety evaluation of food ingredients.

    PubMed

    Hayes, A W; Dayan, A D; Hall, W C; Kodell, R L; Williams, G M; Waddell, W D; Slesinski, R S; Kruger, C L

    2011-06-01

    Extensive experience in conducting long-term cancer bioassays has been gained over the past 50 years of animal testing on drugs, pesticides, industrial chemicals, food additives and consumer products. Testing protocols for the conduct of carcinogenicity studies in rodents have been developed in Guidelines promulgated by regulatory agencies, including the US EPA (Environmental Protection Agency), the US FDA (Food and Drug Administration), the OECD (Organization for Economic Co-operation and Development) for the EU member states and the MAFF (Ministry of Agriculture, Forestry and Fisheries) and MHW (Ministry of Health and Welfare) in Japan. The basis of critical elements of the study design that lead to an accepted identification of the carcinogenic hazard of substances in food and beverages is the focus of this review. The approaches used by entities well-known for carcinogenicity testing and/or guideline development are discussed. Particular focus is placed on comparison of testing programs used by the US National Toxicology Program (NTP) and advocated in OECD guidelines to the testing programs of the European Ramazzini Foundation (ERF), an organization with numerous published carcinogenicity studies. This focus allows for a good comparison of differences in approaches to carcinogenicity testing and allows for a critical consideration of elements important to appropriate carcinogenicity study designs and practices. OECD protocols serve as good standard models for carcinogenicity testing protocol design. Additionally, the detailed design of any protocol should include attention to the rationale for inclusion of particular elements, including the impact of those elements on study interpretations. Appropriate interpretation of study results is dependent on rigorous evaluation of the study design and conduct, including differences from standard practices. Important considerations are differences in the strain of animal used, diet and housing practices, rigorousness of test procedures, dose selection, histopathology procedures, application of historical control data, statistical evaluations and whether statistical extrapolations are supported by, or are beyond the limits of, the data generated. Without due consideration, conflicting data interpretations and uncertainty about the relevance of a study's results to human risk can result. This paper discusses the critical elements of rodent (rat) carcinogenicity studies, particularly with respect to the study of food ingredients. It also highlights study practices and procedures that can detract from the appropriate evaluation of human relevance of results, indicating the importance of adherence to international consensus protocols, such as those detailed by OECD. Copyright © 2010. Published by Elsevier Inc.

  6. Rare earth element geochemistry of shallow carbonate outcropping strata in Saudi Arabia: Application for depositional environments prediction

    NASA Astrophysics Data System (ADS)

    Eltom, Hassan A.; Abdullatif, Osman M.; Makkawi, Mohammed H.; Eltoum, Isam-Eldin A.

    2017-03-01

    The interpretation of depositional environments provides important information to understand facies distribution and geometry. The classical approach to interpret depositional environments principally relies on the analysis of lithofacies, biofacies and stratigraphic data, among others. An alternative method, based on geochemical data (chemical element data), is advantageous because it can simply, reproducibly and efficiently interpret and refine the interpretation of the depositional environment of carbonate strata. Here we geochemically analyze and statistically model carbonate samples (n = 156) from seven sections of the Arab-D reservoir outcrop analog of central Saudi Arabia, to determine whether the elemental signatures (major, trace and rare earth elements [REEs]) can be effectively used to predict depositional environments. We find that lithofacies associations of the studied outcrop (peritidal to open marine depositional environments) possess altered REE signatures, and that this trend increases stratigraphically from bottom-to-top, which corresponds to an upward shallowing of depositional environments. The relationship between REEs and major, minor and trace elements indicates that contamination by detrital materials is the principal source of REEs, whereas redox condition, marine and diagenetic processes have minimal impact on the relative distribution of REEs in the lithofacies. In a statistical model (factor analysis and logistic regression), REEs, major and trace elements cluster together and serve as markers to differentiate between peritidal and open marine facies and to differentiate between intertidal and subtidal lithofacies within the peritidal facies. The results indicate that statistical modelling of the elemental composition of carbonate strata can be used as a quantitative method to predict depositional environments and regional paleogeography. The significance of this study lies in offering new assessments of the relationships between lithofacies and geochemical elements by using advanced statistical analysis, a method that could be used elsewhere to interpret depositional environment and refine facies models.

  7. Assessing Variability in Brain Tumor Segmentation to Improve Volumetric Accuracy and Characterization of Change.

    PubMed

    Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William

    2016-02-01

    Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
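
    The Dice coefficient reported above is straightforward to compute from two binary segmentation masks; a minimal sketch with toy masks standing in for tumor segmentations:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks of equal shape."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 2D masks standing in for tumor regions of interest from two methods.
seg_1 = np.zeros((6, 6), int); seg_1[1:4, 1:4] = 1
seg_2 = np.zeros((6, 6), int); seg_2[2:5, 1:4] = 1
print(f"Dice = {dice_coefficient(seg_1, seg_2):.3f}")
```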

  8. Designing and Interpreting Limiting Dilution Assays: General Principles and Applications to the Latent Reservoir for Human Immunodeficiency Virus-1.

    PubMed

    Rosenbloom, Daniel I S; Elliott, Oliver; Hill, Alison L; Henrich, Timothy J; Siliciano, Janet M; Siliciano, Robert F

    2015-12-01

    Limiting dilution assays are widely used in infectious disease research. These assays are crucial for current human immunodeficiency virus (HIV)-1 cure research in particular. In this study, we offer new tools to help investigators design and analyze dilution assays based on their specific research needs. Limiting dilution assays are commonly used to measure the extent of infection, and in the context of HIV they represent an essential tool for studying latency and potential curative strategies. Yet standard assay designs may not discern whether an intervention reduces an already miniscule latent infection. This review addresses challenges arising in this setting and in the general use of dilution assays. We illustrate the major statistical method for estimating frequency of infectious units from assay results, and we offer an online tool for computing this estimate. We recommend a procedure for customizing assay design to achieve desired sensitivity and precision goals, subject to experimental constraints. We consider experiments in which no viral outgrowth is observed and explain how using alternatives to viral outgrowth may make measurement of HIV latency more efficient. Finally, we discuss how biological complications, such as probabilistic growth of small infections, alter interpretations of experimental results.
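
    A hedged sketch of the standard single-hit Poisson model behind limiting dilution assays, in which the frequency of infectious units is estimated by maximum likelihood from the positive and negative wells at each cell dose. The dose levels and well counts below are illustrative, not taken from the paper, and this is not the authors' online tool.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative limiting-dilution layout: cells per well, wells tested, wells positive.
doses     = np.array([1e6, 2e5, 4e4, 8e3])
n_wells   = np.array([12, 12, 12, 12])
positives = np.array([11, 6, 2, 0])

def neg_log_likelihood(log_f):
    """Single-hit Poisson model: P(well negative) = exp(-f * dose)."""
    f = np.exp(log_f)
    negatives = n_wells - positives
    log_p_neg = -f * doses                      # log P(no infectious unit in the well)
    log_p_pos = np.log1p(-np.exp(log_p_neg))    # log P(at least one infectious unit)
    return -np.sum(negatives * log_p_neg + positives * log_p_pos)

result = minimize_scalar(neg_log_likelihood, bounds=(-20, -5), method="bounded")
f_hat = np.exp(result.x)
print(f"estimated frequency: 1 infectious unit per {1 / f_hat:,.0f} cells")
```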

  9. HiCRep: assessing the reproducibility of Hi-C data using a stratum-adjusted correlation coefficient.

    PubMed

    Yang, Tao; Zhang, Feipeng; Yardımcı, Galip Gürkan; Song, Fan; Hardison, Ross C; Noble, William Stafford; Yue, Feng; Li, Qunhua

    2017-11-01

    Hi-C is a powerful technology for studying genome-wide chromatin interactions. However, current methods for assessing Hi-C data reproducibility can produce misleading results because they ignore spatial features in Hi-C data, such as domain structure and distance dependence. We present HiCRep, a framework for assessing the reproducibility of Hi-C data that systematically accounts for these features. In particular, we introduce a novel similarity measure, the stratum adjusted correlation coefficient (SCC), for quantifying the similarity between Hi-C interaction matrices. Not only does it provide a statistically sound and reliable evaluation of reproducibility, SCC can also be used to quantify differences between Hi-C contact matrices and to determine the optimal sequencing depth for a desired resolution. The measure consistently shows higher accuracy than existing approaches in distinguishing subtle differences in reproducibility and depicting interrelationships of cell lineages. The proposed measure is straightforward to interpret and easy to compute, making it well-suited for providing standardized, interpretable, automatable, and scalable quality control. The freely available R package HiCRep implements our approach. © 2017 Yang et al.; Published by Cold Spring Harbor Laboratory Press.
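
    A heavily simplified sketch of the idea behind a stratum-adjusted correlation: correlations are computed within each genomic-distance stratum (diagonal) of two contact matrices and then combined with weights. This toy version omits the smoothing and variance-stabilizing weights of HiCRep and is not its R implementation; the matrices are synthetic.

```python
import numpy as np

def stratified_correlation(m1, m2, max_stratum):
    """Weighted average of per-diagonal Pearson correlations of two square matrices."""
    corrs, weights = [], []
    for k in range(max_stratum + 1):
        d1, d2 = np.diagonal(m1, k), np.diagonal(m2, k)
        if d1.size > 2 and d1.std() > 0 and d2.std() > 0:
            corrs.append(np.corrcoef(d1, d2)[0, 1])
            weights.append(d1.size)            # weight strata by number of bin pairs
    return np.average(corrs, weights=weights)

rng = np.random.default_rng(2)
base = rng.poisson(10, size=(50, 50)).astype(float)
rep1 = base + rng.normal(0, 1, base.shape)     # two noisy "replicates" of the same map
rep2 = base + rng.normal(0, 1, base.shape)
print(f"stratified correlation = {stratified_correlation(rep1, rep2, max_stratum=10):.2f}")
```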

  10. Evaluation and interpretation of Thematic Mapper ratios in equations for estimating corn growth parameters

    NASA Technical Reports Server (NTRS)

    Dardner, B. R.; Blad, B. L.; Thompson, D. R.; Henderson, K. E.

    1985-01-01

    Reflectance and agronomic Thematic Mapper (TM) data were analyzed to determine possible data transformations for evaluating several plant parameters of corn. Three transformation forms were used: the ratio of two TM bands, logarithms of two-band ratios, and normalized differences of two bands. Normalized differences and logarithms of two-band ratios responded similarly in the equations for estimating the plant growth parameters evaluated in this study. Two-term equations were required to obtain the maximum predictability of percent ground cover, canopy moisture content, and total wet phytomass. Standard error of estimate values were 15-26 percent lower for two-term estimates of these parameters than for one-term estimates. The terms log(TM4/TM2) and (TM4/TM5) produced the maximum predictability for leaf area and dry green leaf weight, respectively. The middle infrared bands TM5 and TM7 are essential for maximizing predictability for all measured plant parameters except leaf area index. The estimating models were evaluated over bare soil to discriminate between equations which are statistically similar. Qualitative interpretations of the resulting prediction equations are consistent with general agronomic and remote sensing theory.
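
    The three transformation forms evaluated above are simple to reproduce; the sketch below computes a two-band ratio, the logarithm of a two-band ratio, and a normalized difference for hypothetical TM reflectance values (band roles follow the abstract, numbers are invented).

```python
import numpy as np

# Hypothetical reflectance values for Thematic Mapper bands over a few pixels.
tm2 = np.array([0.08, 0.10, 0.09, 0.12])   # green
tm4 = np.array([0.35, 0.42, 0.30, 0.50])   # near infrared
tm5 = np.array([0.20, 0.18, 0.25, 0.22])   # middle infrared

ratio_4_5 = tm4 / tm5                        # simple two-band ratio, e.g. TM4/TM5
log_ratio = np.log(tm4 / tm2)                # log(TM4/TM2), used for leaf area in the study
norm_diff = (tm4 - tm2) / (tm4 + tm2)        # normalized difference of two bands

print(ratio_4_5, log_ratio, norm_diff, sep="\n")
```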

  11. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonable insure...

  12. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonable insure...

  13. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonable insure...

  14. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonable insure...

  15. Clinical evaluation of automated processing of electrocardiograms by the Veterans Administration program (AVA 3.4).

    PubMed

    Brohet, C R; Richman, H G

    1979-06-01

    Automated processing of electrocardiograms by the Veterans Administration program was evaluated for both agreement with physician interpretation and interpretative accuracy as assessed with nonelectrocardiographic criteria. One thousand unselected electrocardiograms were analyzed by two reviewer groups, one familiar and the other unfamiliar with the computer program. A significant number of measurement errors involving repolarization changes and left axis deviation occurred; however, interpretative disagreements related to statistical decision were largely language-related. Use of a printout with a more traditional format resulted in agreement with physician interpretation by both reviewer groups in more than 80 percent of cases. Overall sensitivity based on agreement with nonelectrocardiographic criteria was significantly greater with use of the computer program than with use of the conventional criteria utilized by the reviewers. This difference was particularly evident in the subgroup analysis of myocardial infarction and left ventricular hypertrophy. The degree of overdiagnosis of left ventricular hypertrophy and posteroinferior infarction was initially unacceptable, but this difficulty was corrected by adjustment of probabilities. Clinical acceptability of the Veterans Administration program appears to require greater physician education than that needed for other computer programs of electrocardiographic analysis; the flexibility of interpretation by statistical decision offers the potential for better diagnostic accuracy.

  16. Assessment of water quality parameters using multivariate analysis for Klang River basin, Malaysia.

    PubMed

    Mohamed, Ibrahim; Othman, Faridah; Ibrahim, Adriana I N; Alaa-Eldin, M E; Yunus, Rossita M

    2015-01-01

    This case study uses several univariate and multivariate statistical techniques to evaluate and interpret a water quality data set obtained from the Klang River basin located within the state of Selangor and the Federal Territory of Kuala Lumpur, Malaysia. The river drains an area of 1,288 km(2), from the steep mountain rainforests of the main Central Range along Peninsular Malaysia to the river mouth in Port Klang, into the Straits of Malacca. Water quality was monitored at 20 stations, nine of which are situated along the main river and 11 along six tributaries. Data was collected from 1997 to 2007 for seven parameters used to evaluate the status of the water quality, namely dissolved oxygen, biochemical oxygen demand, chemical oxygen demand, suspended solids, ammoniacal nitrogen, pH, and temperature. The data were first investigated using descriptive statistical tools, followed by two practical multivariate analyses that reduced the data dimensions for better interpretation. The analyses employed were factor analysis and principal component analysis, which explain 60 and 81.6% of the total variation in the data, respectively. We found that the resulting latent variables from the factor analysis are interpretable and beneficial for describing the water quality in the Klang River. This study presents the usefulness of several statistical methods in evaluating and interpreting water quality data for the purpose of monitoring the effectiveness of water resource management. The results should provide more straightforward data interpretation as well as valuable insight for managers to conceive optimum action plans for controlling pollution in river water.
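
    A hedged sketch of the dimension-reduction step described above, using scikit-learn's PCA on a made-up stations-by-parameters matrix; it reports how much of the total variation the leading components explain. The synthetic data stand in for the seven monitored parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Synthetic stand-in for a stations-by-parameters table
# (e.g., DO, BOD, COD, suspended solids, ammoniacal N, pH, temperature).
water_quality = rng.normal(size=(20, 7))

scaled = StandardScaler().fit_transform(water_quality)   # parameters are on different scales
pca = PCA(n_components=3).fit(scaled)

for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i}: {100 * ratio:.1f}% of total variance")
print(f"first three components together: {100 * pca.explained_variance_ratio_.sum():.1f}%")
```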

  17. Interpreting statistics of small lunar craters

    NASA Technical Reports Server (NTRS)

    Schultz, P. H.; Gault, D.; Greeley, R.

    1977-01-01

    Some of the wide variations in the crater-size distributions in lunar photography and in the resulting statistics were interpreted as different degradation rates on different surfaces, different scaling laws in different targets, and a possible population of endogenic craters. These possibilities are reexamined for statistics of 26 different regions. In contrast to most other studies, crater diameters as small as 5 m were measured from enlarged Lunar Orbiter framelets. According to the results of the reported analysis, the different crater distribution types appear to be most consistent with the hypotheses of differential degradation and a superposed crater population. Differential degradation can account for the low level of equilibrium in incompetent materials such as ejecta deposits, mantle deposits, and deep regoliths where scaling law changes and catastrophic processes introduce contradictions with other observations.

  18. On the interpretations of Langevin stochastic equation in different coordinate systems

    NASA Astrophysics Data System (ADS)

    Martínez, E.; López-Díaz, L.; Torres, L.; Alejos, O.

    2004-01-01

    The stochastic Langevin Landau-Lifshitz equation is usually utilized in the micromagnetics formalism to account for thermal effects. Commonly, two different interpretations of the stochastic integrals can be made: Ito and Stratonovich. In this work, the Langevin-Landau-Lifshitz (LLL) equation is written in both Cartesian and spherical coordinates. If spherical coordinates are employed, the noise is additive, and therefore the Ito and Stratonovich solutions are equal. This is not the case when the LLL equation is written in Cartesian coordinates. In this case, the Langevin equation must be interpreted in the Stratonovich sense in order to reproduce correct statistical results. Nevertheless, the statistics of the numerical results obtained from Euler-Ito and Euler-Stratonovich schemes are equivalent due to the additional numerical constraint imposed in the Cartesian system after each time step, which itself assures that the magnitude of the magnetization is preserved.
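
    A compact numerical illustration of the Ito-Stratonovich distinction for multiplicative noise, using a scalar linear SDE rather than the full LLL equation: the Euler-Maruyama scheme converges to the Ito solution, the stochastic Heun (predictor-corrector) scheme to the Stratonovich one, and their sample means differ. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 0.0, 0.5                 # drift and multiplicative-noise strength (arbitrary)
x0, T, n_steps, n_paths = 1.0, 1.0, 1000, 20000
dt = T / n_steps

x_ito = np.full(n_paths, x0)
x_str = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    # Euler-Maruyama: converges to the Ito solution of dX = aX dt + bX dW.
    x_ito += a * x_ito * dt + b * x_ito * dw
    # Stochastic Heun (predictor-corrector): converges to the Stratonovich solution.
    pred = x_str + a * x_str * dt + b * x_str * dw
    x_str += a * 0.5 * (x_str + pred) * dt + b * 0.5 * (x_str + pred) * dw

print(f"Euler-Maruyama mean: {x_ito.mean():.3f}  (Ito theory: {x0 * np.exp(a * T):.3f})")
print(f"Heun mean          : {x_str.mean():.3f}  (Stratonovich theory: "
      f"{x0 * np.exp((a + 0.5 * b**2) * T):.3f})")
```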

  19. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  20. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-12-31

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  1. Analysis and Interpretation of Findings Using Multiple Regression Techniques

    ERIC Educational Resources Information Center

    Hoyt, William T.; Leierer, Stephen; Millington, Michael J.

    2006-01-01

    Multiple regression and correlation (MRC) methods form a flexible family of statistical techniques that can address a wide variety of different types of research questions of interest to rehabilitation professionals. In this article, we review basic concepts and terms, with an emphasis on interpretation of findings relevant to research questions…

  2. Combining data visualization and statistical approaches for interpreting measurements and meta-data: Integrating heatmaps, variable clustering, and mixed regression models

    EPA Science Inventory

    The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...

  3. Developing and Assessing Students' Abilities To Interpret Research.

    ERIC Educational Resources Information Center

    Forsyth, G. Alfred; And Others

    A recent conference on statistics education recommended that more emphasis be placed on the interpretation of research (IOR). Ways for developing and assessing IOR and providing a systematic framework for creating and selecting instructional materials for the independent assessment of specific IOR concepts are the focus of this paper. The…

  4. Effect-Size Measures and Meta-Analytic Thinking in Counseling Psychology Research

    ERIC Educational Resources Information Center

    Henson, Robin K.

    2006-01-01

    Effect sizes are critical to result interpretation and synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret the effect sizes…
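
    As a concrete companion to the discussion of effect sizes, a small sketch computing Cohen's d from a pooled standard deviation for two independent groups; the scores are invented.

```python
import numpy as np

def cohens_d(group_1, group_2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    x1, x2 = np.asarray(group_1, float), np.asarray(group_2, float)
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    return (x1.mean() - x2.mean()) / np.sqrt(pooled_var)

# Invented scores for a treatment and a comparison group.
treatment = [23, 27, 31, 25, 29, 30, 26]
control   = [21, 24, 22, 26, 23, 25, 20]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```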

  5. Effect Sizes in Gifted Education Research

    ERIC Educational Resources Information Center

    Gentry, Marcia; Peters, Scott J.

    2009-01-01

    Recent calls for reporting and interpreting effect sizes have been numerous, with the 5th edition of the "Publication Manual of the American Psychological Association" (2001) calling for the inclusion of effect sizes to interpret quantitative findings. Many top journals have required that effect sizes accompany claims of statistical significance.…

  6. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    USGS Publications Warehouse

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, then we interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the future use of these commonly available ecological and statistical methods in preparing assemblage data for use in ecological indicators.
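
    One way to quantify the convergence among groupings that the authors looked for is a pairwise agreement index between partitions; the sketch below applies the adjusted Rand index (a swap-in for the percent-similarity comparison in the abstract) to made-up site groupings. It illustrates only the comparison step, not the authors' full workflow of ecological interpretation.

```python
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

# Hypothetical site groupings produced by four different statistical methods.
groupings = {
    "logistic": [0, 0, 1, 1, 1, 2, 2, 0, 1, 2],
    "DCA":      [0, 0, 1, 1, 1, 2, 2, 0, 2, 2],
    "cluster":  [1, 1, 0, 0, 0, 2, 2, 1, 0, 2],
    "NMDS":     [0, 0, 1, 1, 2, 2, 2, 0, 1, 2],
}

# ARI is invariant to label permutation, so methods that group the same sites agree.
for (name_a, a), (name_b, b) in combinations(groupings.items(), 2):
    print(f"{name_a:>8s} vs {name_b:<8s}: ARI = {adjusted_rand_score(a, b):.2f}")
```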

  7. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    PubMed

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristics (ROC)-curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by a quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, straightforward, and overcomes a number of difficulties encountered in the Crespo-method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
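
    The two-sample Kolmogorov-Smirnov comparison of T-cell and B-cell distributions described above can be run directly with SciPy; a minimal sketch with simulated fluorescence intensities (not patient data):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
# Simulated ZAP-70 fluorescence intensities: T cells (internal reference) versus B cells.
t_cells = rng.lognormal(mean=2.0, sigma=0.4, size=500)
b_cells = rng.lognormal(mean=1.6, sigma=0.5, size=800)

statistic, p_value = ks_2samp(t_cells, b_cells)
print(f"KS D = {statistic:.3f}, p = {p_value:.2e}")
```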

  8. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  9. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  10. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  11. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  12. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  13. Revisiting Interpretation of Canonical Correlation Analysis: A Tutorial and Demonstration of Canonical Commonality Analysis

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.; Gates, Michael S.

    2010-01-01

    In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of…

  14. Middle Grade Students' Interpretations of Contour Maps

    ERIC Educational Resources Information Center

    Carter, Glenda; Cook, Michelle; Park, John C.; Wiebe, Eric N.; Butler, Susan M.

    2008-01-01

    This study examined eighth graders' approach to three tasks implemented to assist students with learning to interpret contour maps. Students' approach to and interpretation of these three tasks were analyzed qualitatively. When students were rank ordered according to their scores on a standardized test of spatial ability, the Minnesota Paper Form…

  15. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report--Part I.

    PubMed

    Berger, Marc L; Mamdani, Muhammad; Atkins, David; Johnson, Michael L

    2009-01-01

    Health insurers, physicians, and patients worldwide need information on the comparative effectiveness and safety of prescription drugs in routine care. Nonrandomized studies of treatment effects using secondary databases may supplement the evidence base from randomized clinical trials and prospective observational studies. Recognizing the challenges to conducting valid retrospective epidemiologic and health services research studies, a Task Force was formed to develop a guidance document on state-of-the-art approaches to frame research questions and report findings for these studies. The Task Force was commissioned and a Chair was selected by the International Society for Pharmacoeconomics and Outcomes Research Board of Directors in October 2007. This Report, the first of three reported in this issue of the journal, addressed issues of framing the research question and reporting and interpreting findings. The Task Force Report proposes four primary characteristics (relevance, specificity, novelty, and feasibility) to consider when defining the research question. Recommendations included: the practice of a priori specification of the research question; transparency of prespecified analytical plans; provision of justifications for any subsequent changes in the analytical plan; reporting the results of prespecified plans as well as results from significant modifications; structured abstracts to report findings with scientific neutrality; and reasoned interpretations of findings to help inform policy decisions. Comparative effectiveness research in the form of nonrandomized studies using secondary databases can be designed with rigorous elements and conducted with sophisticated statistical methods to improve causal inference of treatment effects. Standardized reporting and careful interpretation of results can aid policy and decision-making.

  16. Multiple imputation of missing fMRI data in whole brain analysis

    PubMed Central

    Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.

    2012-01-01

    Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
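
    A minimal sketch of multiple imputation in this spirit, using scikit-learn's IterativeImputer on a made-up subjects-by-voxels matrix (not the authors' fMRI pipeline or data); each imputation samples from a posterior, and per-voxel estimates are pooled across draws.

      # Sketch: multiple imputation of missing entries with posterior sampling, then pooling.
      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(2)
      # Hypothetical subjects x voxels matrix of effect sizes with ~15% missing entries.
      data = rng.normal(loc=0.5, scale=1.0, size=(30, 8))
      data_missing = data.copy()
      data_missing[rng.random(data.shape) < 0.15] = np.nan

      n_imputations = 5
      pooled_means = []
      for m in range(n_imputations):
          imputer = IterativeImputer(sample_posterior=True, random_state=m, max_iter=10)
          completed = imputer.fit_transform(data_missing)
          pooled_means.append(completed.mean(axis=0))   # per-voxel group mean

      # Pool point estimates across imputations (full Rubin's rules would also
      # combine within- and between-imputation variance).
      print("Pooled per-voxel means:", np.round(np.mean(pooled_means, axis=0), 2))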

  17. Cross Talk: Evaluation of a Curriculum to Teach Medical Students How to Use Telephone Interpreter Services.

    PubMed

    Omoruyi, Emma A; Dunkle, Jesse; Dendy, Colby; McHugh, Erin; Barratt, Michelle S

    2018-03-01

    Telephone interpretation and recent technology advances assist patients with more timely access to rare languages, but no one has examined the role of this technology in the medical setting and how medical students can be prepared for their use. We sought to determine if structured curriculum on interpretation would promote learners self-reported competency in these encounters and if proficiency would be demonstrated in actual patient encounters. Training on the principles of interpreter use with a focus on communication technology was added to medical student education. The students later voluntarily completed a retrospective pre/post training competency self-assessment. A cohort of students rotating at a clinical site had a blinded review of their telephone interpretation encounters scored on a modified validated scale and compared to scored encounters with preintervention learners. Nested ANOVA models were used for audio file analysis. A total of 176 students who completed the training reported a statistically significant improvement in all 4 interpretation competency domains. Eighty-three audio files were analyzed from students before and after intervention. These scored encounters showed no statistical difference between the scores of the 2 groups. However, plotting the mean scores over time from each encounter suggests that those who received the curriculum started their rotation with higher scores and maintained those scores. In an evaluation of learners' ability to use interpreters in actual patient encounters, focused education led to earlier proficiency of using interpreters compared to peers who received no training. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  18. Imprints of non-standard dark energy and dark matter models on the 21cm intensity map power spectrum

    NASA Astrophysics Data System (ADS)

    Carucci, Isabella P.; Corasaniti, Pier-Stefano; Viel, Matteo

    2017-12-01

    We study the imprint of non-standard dark energy (DE) and dark matter (DM) models on the 21cm intensity map power spectra from high-redshift neutral hydrogen (HI) gas. To this purpose we use halo catalogs from N-body simulations of dynamical DE models and DM scenarios which are as successful as the standard Cold Dark Matter model with Cosmological Constant (ΛCDM) at interpreting available cosmological observations. We limit our analysis to halo catalogs at redshift z=1 and 2.3 which are common to all simulations. For each catalog we model the HI distribution by using a simple prescription to associate the HI gas mass to N-body halos. We find that the DE models leave a distinct signature on the HI spectra across a wide range of scales, which correlates with differences in the halo mass function and the onset of the non-linear regime of clustering. In the case of the non-standard DM model significant differences of the HI spectra with respect to the ΛCDM model only arise from the suppressed abundance of low mass halos. These cosmological model dependent features also appear in the 21cm spectra. In particular, we find that future SKA measurements can distinguish the imprints of DE and DM models at high statistical significance.

  19. F-106 data summary and model results relative to threat criteria and protection design analysis

    NASA Technical Reports Server (NTRS)

    Pitts, F. L.; Finelli, G. B.; Perala, R. A.; Rudolph, T. H.

    1986-01-01

    The NASA F-106 has acquired considerable data on the rates-of-change of electromagnetic parameters on the aircraft surface during 690 direct lightning strikes while penetrating thunderstorms at altitudes ranging from 15,000 to 40,000 feet. These in-situ measurements have provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining lightning indirect effects on aircraft. The data are presently being used in updating previous lightning criteria and standards developed over the years from ground-based measurements. The new lightning standards will, therefore, be the first which reflect actual aircraft responses measured at flight altitudes. The modeling technique developed to interpret and understand the direct strike electromagnetic data acquired on the F-106 provides a means to model the interaction of the lightning channel with the F-106. The reasonable results obtained with the model, compared to measured responses, yield confidence that the model may be credibly applied to other aircraft types and used in the prediction of internal coupling effects in the design of lightning protection for new aircraft.

  20. Applying openEHR's Guideline Definition Language to the SITS international stroke treatment registry: a European retrospective observational study.

    PubMed

    Anani, Nadim; Mazya, Michael V; Chen, Rong; Prazeres Moreira, Tiago; Bill, Olivier; Ahmed, Niaz; Wahlgren, Nils; Koch, Sabine

    2017-01-10

    Interoperability standards intend to standardise health information, clinical practice guidelines intend to standardise care procedures, and patient data registries are vital for monitoring quality of care and for clinical research. This study combines all three: it uses interoperability specifications to model guideline knowledge and applies the result to registry data. We applied the openEHR Guideline Definition Language (GDL) to data from 18,400 European patients in the Safe Implementation of Treatments in Stroke (SITS) registry to retrospectively check their compliance with European recommendations for acute stroke treatment. Comparing compliance rates obtained with GDL to those obtained by conventional statistical data analysis yielded a complete match, suggesting that GDL technology is reliable for guideline compliance checking. The successful application of a standard guideline formalism to a large patient registry dataset is an important step toward widespread implementation of computer-interpretable guidelines in clinical practice and registry-based research. Application of the methodology gave important results on the evolution of stroke care in Europe, important both for quality of care monitoring and clinical research.

  1. Department of Homeland Security (DHS) Proficiency Testing on Small-Scale Safety and Thermal Testing of Improvised Explosives

    NASA Astrophysics Data System (ADS)

    Reynolds, John; Sandstrom, Mary; Brown, Geoffrey; Warner, Kirstin; Phillips, Jason; Shelley, Timothy; Reyes, Jose; Hsu, Peter

    2013-06-01

    One of the first steps in establishing safe handling procedures for explosives is small-scale safety and thermal (SSST) testing. To better understand the response of improvised materials or HMEs to SSST testing, 18 HME materials were compared to 3 standard military explosives in a proficiency-type round robin study among five laboratories--2 DoD and 3 DOE--sponsored by DHS. The testing matrix has been designed to address problems encountered with improvised materials--powder mixtures, liquid suspensions, partially wetted solids, immiscible liquids, and reactive materials. Over 30 issues have been identified that indicate standard test methods may require modification when applied to HMEs to derive the accurate sensitivity assessments needed for developing safe handling and storage practices. This presentation will discuss experimental difficulties encountered when testing these problematic samples, show inter-laboratory testing results, present some statistical interpretation of the results, and highlight some of the testing issues. Some of the work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-617519 (721812).

  2. 100 Students

    ERIC Educational Resources Information Center

    Riskowski, Jody L.; Olbricht, Gayla; Wilson, Jennifer

    2010-01-01

    Statistics is the art and science of gathering, analyzing, and making conclusions from data. However, many people do not fully understand how to interpret statistical results and conclusions. Placing students in a collaborative environment involving project-based learning may enable them to overcome misconceptions of probability and enhance the…

  3. Statistical characteristics of MST radar echoes and its interpretation

    NASA Technical Reports Server (NTRS)

    Woodman, Ronald F.

    1989-01-01

    Two concepts of fundamental importance are reviewed: the autocorrelation function and the frequency power spectrum. In addition, some turbulence concepts, the relationship between radar signals and atmospheric medium statistics, partial reflection, and the characteristics of noise and clutter interference are discussed.

  4. On the adequacy of identified Cole-Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency domain complex impedance data and simple error estimation is obtained from the squared difference of the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations without the need for an initial guess for initialisation. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to consider the adequacy of the resulting Cole-Cole model.
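
    For orientation, here is a minimal sketch of the conventional iterative fit that the direct algorithm is compared against, using the Pelton form of the Cole-Cole impedance and SciPy's least_squares on simulated data; the parameter values, noise level, and bounds are illustrative assumptions, not the paper's setup.

      # Sketch: iterative least-squares fit of a Cole-Cole (Pelton) impedance model.
      import numpy as np
      from scipy.optimize import least_squares

      def cole_cole(omega, r0, m, tau, c):
          """Pelton Cole-Cole complex impedance."""
          return r0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

      freq = np.logspace(-2, 3, 40)
      omega = 2 * np.pi * freq
      rng = np.random.default_rng(3)
      z_obs = cole_cole(omega, 100.0, 0.4, 0.05, 0.6) \
              + rng.normal(0, 0.3, omega.size) + 1j * rng.normal(0, 0.3, omega.size)

      def residuals(p):
          z_mod = cole_cole(omega, *p)
          return np.concatenate([(z_obs - z_mod).real, (z_obs - z_mod).imag])

      fit = least_squares(residuals, x0=[80.0, 0.3, 0.01, 0.5],
                          bounds=([1, 0, 1e-4, 0.1], [1e4, 1, 10, 1.0]))
      print("Estimated [r0, m, tau, c]:", np.round(fit.x, 3))
      # A chi-square adequacy check, as discussed above, would compare the weighted
      # residual sum of squares against chi-square with 2*n_freq - 4 degrees of freedom.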

  5. Contrast enhanced dual energy spectral mammogram, an emerging addendum in breast imaging.

    PubMed

    Kariyappa, Kalpana D; Gnanaprakasam, Francis; Anand, Subhapradha; Krishnaswami, Murali; Ramachandran, Madan

    2016-11-01

    To assess the role of contrast-enhanced dual-energy spectral mammogram (CEDM) as a problem-solving tool in equivocal cases. 44 consenting females with equivocal findings on full-field digital mammogram underwent CEDM. All the images were interpreted by two radiologists independently. Confidence of presence was plotted on a three-point Likert scale and probability of cancer was assigned on Breast Imaging Reporting and Data System scoring. Histopathology was taken as the gold standard. Statistical analyses of all variables were performed. 44 breast lesions were included in the study, of which 77.3% were malignant or precancerous and 22.7% were benign or inconclusive. 20% of lesions were identified only on CEDM. The true extent of the lesion was depicted in 15.9% of cases, multifocality was established in 9.1% of cases and ductal extension was demonstrated in 6.8% of cases. The findings for CEDM were statistically significant (p < 0.05). The interobserver kappa value was 0.837. CEDM has a useful role in identifying occult lesions in dense breasts and in triaging lesions. In a mammographically visible lesion, CEDM characterizes the lesion, affirms the finding and better demonstrates response to treatment. Hence, we conclude that CEDM is a useful complementary tool to the standard mammogram. Advances in knowledge: CEDM can detect and demonstrate lesions even in dense breasts, with the advantage of feasibility of stereotactic biopsy in the same setting. Hence, it has the potential to be a screening modality, with need for further studies and validation.

  6. When Statistical Literacy Really Matters: Understanding Published Information about the HIV/AIDS Epidemic in South Africa

    ERIC Educational Resources Information Center

    Hobden, Sally

    2014-01-01

    Information on the HIV/AIDS epidemic in Southern Africa is often interpreted through a veil of secrecy and shame and, I argue, with flawed understanding of basic statistics. This research determined the levels of statistical literacy evident in 316 future Mathematical Literacy teachers' explanations of the median in the context of HIV/AIDS…

  7. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  8. Improving Quality in Teaching Statistics Concepts Using Modern Visualization: The Design and Use of the Flash Application on Pocket PCs

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Pei-Yu

    2009-01-01

    The emergence of technology has led to numerous changes in mathematical and statistical teaching and learning which has improved the quality of instruction and teacher/student interactions. The teaching of statistics, for example, has shifted from mathematical calculations to higher level cognitive abilities such as reasoning, interpretation, and…

  9. Effects of Matching Multiple Memory Strategies with Computer-Assisted Instruction on Students' Statistics Learning Achievement

    ERIC Educational Resources Information Center

    Liao, Ying; Lin, Wen-He

    2016-01-01

    In the era when digitalization is pursued, numbers are the major medium of information performance and statistics is the primary instrument to interpret and analyze numerical information. For this reason, the cultivation of fundamental statistical literacy should be a key in the learning area of mathematics at the stage of compulsory education.…

  10. Conformity and statistical tolerancing

    NASA Astrophysics Data System (ADS)

    Leblond, Laurent; Pillet, Maurice

    2018-02-01

    Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931; reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One of the probable reasons for this low utilization is undoubtedly the difficulty designers face in anticipating the risks of this approach. The arithmetic tolerance (worst case) allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition. An interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion of the parts guaranteeing low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the space of decentring and dispersion).
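
    A small numerical illustration (not taken from the paper) of the contrast drawn above between the arithmetic (worst-case) stack of component tolerances and the quadratic (root-sum-square) stack that underlies statistical tolerancing:

      # Sketch: arithmetic vs quadratic stacking of hypothetical +/- tolerances.
      import math

      tolerances = [0.10, 0.05, 0.08, 0.12]   # illustrative +/- tolerances of 4 parts

      worst_case = sum(tolerances)                                # arithmetic stack
      statistical = math.sqrt(sum(t ** 2 for t in tolerances))    # quadratic stack

      print(f"Arithmetic (worst-case) stack: +/-{worst_case:.3f}")
      print(f"Quadratic (statistical) stack: +/-{statistical:.3f}")
      # The quadratic stack is tighter because it assumes independent, roughly centred
      # deviations; hence the design-manufacturing dialogue on centring and dispersion
      # that the abstract emphasises.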

  11. Decoding communities in networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo

    2018-02-01

    According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.

  12. Assessment of the Incremental Benefit of Computer-Aided Detection (CAD) for Interpretation of CT Colonography by Experienced and Inexperienced Readers

    PubMed Central

    Boone, Darren; Mallett, Susan; McQuillan, Justine; Taylor, Stuart A.; Altman, Douglas G.; Halligan, Steve

    2015-01-01

    Objectives: To quantify the incremental benefit of computer-assisted-detection (CAD) for polyps, for inexperienced readers versus experienced readers of CT colonography. Methods: 10 inexperienced and 16 experienced radiologists interpreted 102 colonography studies unassisted and with CAD utilised in a concurrent paradigm. They indicated any polyps detected on a study sheet. Readers’ interpretations were compared against a ground-truth reference standard: 46 studies were normal and 56 had at least one polyp (132 polyps in total). The primary study outcome was the difference in CAD net benefit (a combination of change in sensitivity and change in specificity with CAD, weighted towards sensitivity) for detection of patients with polyps. Results: Inexperienced readers’ per-patient sensitivity rose from 39.1% to 53.2% with CAD and specificity fell from 94.1% to 88.0%, both statistically significant. Experienced readers’ sensitivity rose from 57.5% to 62.1% and specificity fell from 91.0% to 88.3%, both non-significant. Net benefit with CAD assistance was significant for inexperienced readers but not for experienced readers: 11.2% (95%CI 3.1% to 18.9%) versus 3.2% (95%CI -1.9% to 8.3%) respectively. Conclusions: Concurrent CAD resulted in a significant net benefit when used by inexperienced readers to identify patients with polyps by CT colonography. The net benefit was nearly four times the magnitude of that observed for experienced readers. Experienced readers did not benefit significantly from concurrent CAD. PMID:26355745
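
    The scale of the reported net benefit can be approximated with a simple weighted combination of the sensitivity and specificity changes; the weight w below is an assumption for illustration only, since the study used its own prespecified weighting toward sensitivity.

      # Sketch: a weighted net-benefit summary of the reported sensitivity/specificity changes.
      def net_benefit(sens_pre, sens_post, spec_pre, spec_post, w=0.5):
          """Change in sensitivity plus down-weighted change in specificity (percentage points)."""
          return (sens_post - sens_pre) + w * (spec_post - spec_pre)

      inexperienced = net_benefit(39.1, 53.2, 94.1, 88.0)   # figures quoted in the abstract
      experienced = net_benefit(57.5, 62.1, 91.0, 88.3)
      print(f"Inexperienced readers: {inexperienced:.1f} percentage points")
      print(f"Experienced readers:   {experienced:.1f} percentage points")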

  13. Decoding communities in networks.

    PubMed

    Radicchi, Filippo

    2018-02-01

    According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.

  14. Impact of Asynchronous Training on Radiology Learning Curve among Emergency Medicine Residents and Clerkship Students

    PubMed Central

    Pourmand, Ali; Woodward, Christina; Shokoohi, Hamid; King, Jordan B; Taheri, M Reza; King, Jackson; Lawrence, Christopher

    2018-01-01

    Context: Web-based learning (WBL) modules are effectively used to improve medical education curriculum; however, they have not been evaluated to improve head computed tomography (CT) scan interpretation in an emergency medicine (EM) setting. Objective: To evaluate the effectiveness of a WBL module to aid identification of cranial structures on CT and to improve ability to distinguish between normal and abnormal findings. Design: Prospective, before-and-after trial in the Emergency Department of an academic center. Baseline head CT knowledge was assessed via a standardized test containing ten head CT scans, including normal scans and those showing hemorrhagic stroke, trauma, and infection (abscess). All trainees then participated in a WBL intervention. Three weeks later, they were given the same ten CT scans to evaluate in a standardized posttest. Main Outcome Measures: Improvement in test scores. Results: A total of 131 EM clerkship students and 32 EM residents were enrolled. Pretest scores correlated with stage of training, with students and first-year residents demonstrating the lowest scores. Overall, there was a significant improvement in the percentage of correctly classified CT images after the training intervention, from a mean pretest score of 32% ± 12% to a posttest score of 67% ± 13% (mean improvement = 35% ± 13%, p < 0.001). Among subsets by training level, all subgroups except first-year residents demonstrated a statistically significant increase in scores after the training. Conclusion: Incorporating asynchronous WBL modules into EM clerkship and residency curriculum provides early radiographic exposure in their clinical training and can enhance diagnostic head CT scan interpretation. PMID:29272248

  15. PlanetPack: A radial-velocity time-series analysis tool facilitating exoplanets detection, characterization, and dynamical simulations

    NASA Astrophysics Data System (ADS)

    Baluev, Roman V.

    2013-08-01

    We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the goal of exoplanet detection, characterization, and basic dynamical N-body simulations. PlanetPack is a command-line interpreter that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with the proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N-body RV fits; (iii) use of more efficient maximum-likelihood periodograms that involve the full multi-planet fitting (sometimes called “residual” or “recursive” periodograms); (iv) easily calculable parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) user-friendly fitting under some useful functional constraints; (vi) basic tasks of short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss-Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for determining statistical significance. It is planned that further functionality may be added to PlanetPack in the future. During the development of this software, a lot of effort was made to improve the computational speed, especially for CPU-demanding tasks. PlanetPack was written in pure C++ (the 1998/2003 standard), and is expected to be compilable and usable on a wide range of platforms.
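
    As a toy illustration of item (i) above, and not PlanetPack itself, the following fits a circular-orbit sinusoid to simulated radial velocities while treating an unknown jitter term, added in quadrature to the formal errors, as a free parameter of the likelihood; all values are illustrative assumptions.

      # Sketch: maximum-likelihood RV fit with a free jitter term.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      t = np.sort(rng.uniform(0, 200, 60))            # observation epochs [days]
      sigma = rng.uniform(2.0, 4.0, t.size)           # formal RV errors [m/s]
      true_K, true_P, true_phi, true_jitter = 12.0, 23.5, 0.7, 3.0
      rv = true_K * np.sin(2 * np.pi * t / true_P + true_phi) \
           + rng.normal(0, np.sqrt(sigma**2 + true_jitter**2))

      def neg_log_like(params):
          K, P, phi, log_s = params
          model = K * np.sin(2 * np.pi * t / P + phi)
          var = sigma**2 + np.exp(2 * log_s)          # formal error + jitter in quadrature
          return 0.5 * np.sum((rv - model)**2 / var + np.log(2 * np.pi * var))

      fit = minimize(neg_log_like, x0=[10.0, 23.0, 0.5, np.log(1.0)], method="Nelder-Mead")
      K, P, phi, log_s = fit.x
      print(f"K = {K:.1f} m/s, P = {P:.2f} d, jitter = {np.exp(log_s):.1f} m/s")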

  16. Statistical significance versus clinical relevance.

    PubMed

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

    In March 2016, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
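
    A small numeric companion to the point above, on simulated data: report the effect estimate with its confidence interval rather than the P-value alone.

      # Sketch: difference of means with a 95% CI and a two-sided P-value.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      treatment = rng.normal(loc=5.2, scale=2.0, size=80)   # hypothetical outcome values
      control = rng.normal(loc=4.5, scale=2.0, size=80)

      diff = treatment.mean() - control.mean()
      se = np.sqrt(treatment.var(ddof=1)/treatment.size + control.var(ddof=1)/control.size)
      df = treatment.size + control.size - 2                # simple approximation
      p_value = 2 * stats.t.sf(abs(diff / se), df)
      half_width = stats.t.ppf(0.975, df) * se

      print(f"Difference = {diff:.2f}, 95% CI = ({diff - half_width:.2f}, {diff + half_width:.2f}), "
            f"P = {p_value:.3f}")
      # The CI conveys both the magnitude of the effect and the imprecision of the
      # estimate; the P-value alone conveys neither.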

  17. Oral and maxillofacial surgery residents have poor understanding of biostatistics.

    PubMed

    Best, Al M; Laskin, Daniel M

    2013-01-01

    The purpose of this study was to evaluate residents' understanding of biostatistics and interpretation of research results. A questionnaire previously used in internal medicine residents was modified to include oral and maxillofacial surgery (OMS) examples. The survey included sections to identify demographic and educational characteristics of residents, attitudes and confidence, and the primary outcome-knowledge of biostatistics. In 2009 an invitation to the Internet survey was sent to all 106 program directors in the United States, who were requested to forward it to their residents. One hundred twelve residents responded. The percentage of residents who had taken a course in epidemiology was 53%; biostatistics, 49%; and evidence-based dentistry, 65%. Conversely, 10% of OMS residents had taken none of these classes. Across the 6-item test of knowledge of statistical methods, the mean percentage of correct answers was 38% (SD, 22%). Nearly half of the residents (42%) could not correctly identify continuous, ordinal, or nominal variables. Only 21% correctly identified a case-control study, but 79% correctly identified that the purpose of blinding was to reduce bias. Only 46% correctly interpreted a clinically unimportant and statistically nonsignificant result. None of the demographic or experience factors of OMS residents were related to statistical knowledge. Overall, OMS resident knowledge was below that of internal medicine residents (P<.0001). However, OMS residents were overconfident in their claim to understand most statistical terms. OMS residents lack knowledge in biostatistics and the interpretation of research and are thus unprepared to interpret the results of published clinical research. Residency programs should include effective biostatistical training in their curricula to prepare residents in evidence-based dentistry. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  18. International recommendations for electrocardiographic interpretation in athletes.

    PubMed

    Sharma, Sanjay; Drezner, Jonathan A; Baggish, Aaron; Papadakis, Michael; Wilson, Mathew G; Prutkin, Jordan M; La Gerche, Andre; Ackerman, Michael J; Borjesson, Mats; Salerno, Jack C; Asif, Irfan M; Owens, David S; Chung, Eugene H; Emery, Michael S; Froelicher, Victor F; Heidbuchel, Hein; Adamuz, Carmen; Asplund, Chad A; Cohen, Gordon; Harmon, Kimberly G; Marek, Joseph C; Molossi, Silvana; Niebauer, Josef; Pelto, Hank F; Perez, Marco V; Riding, Nathan R; Saarel, Tess; Schmied, Christian M; Shipon, David M; Stein, Ricardo; Vetter, Victoria L; Pelliccia, Antonio; Corrado, Domenico

    2018-04-21

    Sudden cardiac death (SCD) is the leading cause of mortality in athletes during sport. A variety of mostly hereditary, structural, or electrical cardiac disorders are associated with SCD in young athletes, the majority of which can be identified or suggested by abnormalities on a resting 12-lead electrocardiogram (ECG). Whether used for diagnostic or screening purposes, physicians responsible for the cardiovascular care of athletes should be knowledgeable and competent in ECG interpretation in athletes. However, in most countries a shortage of physician expertise limits wider application of the ECG in the care of the athlete. A critical need exists for physician education in modern ECG interpretation that distinguishes normal physiological adaptations in athletes from distinctly abnormal findings suggestive of underlying pathology. Since the original 2010 European Society of Cardiology recommendations for ECG interpretation in athletes, ECG standards have evolved quickly over the last decade; pushed by a growing body of scientific data that both tests proposed criteria sets and establishes new evidence to guide refinements. On 26-27 February 2015, an international group of experts in sports cardiology, inherited cardiac disease, and sports medicine convened in Seattle, Washington, to update contemporary standards for ECG interpretation in athletes. The objective of the meeting was to define and revise ECG interpretation standards based on new and emerging research and to develop a clear guide to the proper evaluation of ECG abnormalities in athletes. This statement represents an international consensus for ECG interpretation in athletes and provides expert opinion-based recommendations linking specific ECG abnormalities and the secondary evaluation for conditions associated with SCD.

  19. Fish: A New Computer Program for Friendly Introductory Statistics Help

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Raffle, Holly

    2005-01-01

    All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…

  20. Statistical Interpretation of the Local Field Inside Dielectrics.

    ERIC Educational Resources Information Center

    Berrera, Ruben G.; Mello, P. A.

    1982-01-01

    Compares several derivations of the Clausius-Mossotti relation to analyze consistently the nature of the approximations used and their range of applicability. Also presents a statistical-mechanical calculation of the local field for a classical system of harmonic oscillators interacting via the Coulomb potential. (Author/SK)
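
    For reference, the relation under discussion in its standard SI form, with N the number density of polarizable units and alpha the polarizability defined through the Lorentz local field E_loc = E + P/(3 epsilon_0):

      % Clausius-Mossotti relation (SI units), linking the relative permittivity
      % \varepsilon_r to the molecular polarizability \alpha under the local-field
      % (Lorentz) approximation.
      \[
        \frac{\varepsilon_r - 1}{\varepsilon_r + 2} \;=\; \frac{N\alpha}{3\varepsilon_0}
      \]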

  1. Counterbalancing and Other Uses of Repeated-Measures Latin-Square Designs: Analyses and Interpretations.

    ERIC Educational Resources Information Center

    Reese, Hayne W.

    1997-01-01

    Recommends that when repeated-measures Latin-square designs are used to counterbalance treatments across a procedural variable or to reduce the number of treatment combinations given to each participant, effects be analyzed statistically, and that in all uses, researchers consider alternative interpretations of the variance associated with the…

  2. Novice Interpretations of Progress Monitoring Graphs: Extreme Values and Graphical Aids

    ERIC Educational Resources Information Center

    Newell, Kirsten W.; Christ, Theodore J.

    2017-01-01

    Curriculum-Based Measurement of Reading (CBM-R) is frequently used to monitor instructional effects and evaluate response to instruction. Educators often view the data graphically on a time-series graph that might include a variety of statistical and visual aids, which are intended to facilitate the interpretation. This study evaluated the effects…

  3. Leaping from Discrete to Continuous Independent Variables: Sixth Graders' Science Line Graph Interpretations

    ERIC Educational Resources Information Center

    Boote, Stacy K.; Boote, David N.

    2017-01-01

    Students often struggle to interpret graphs correctly, despite emphasis on graphic literacy in U.S. education standards documents. The purpose of this study was to describe challenges sixth graders with varying levels of science and mathematics achievement encounter when transitioning from interpreting graphs having discrete independent variables…

  4. Useful Effect Size Interpretations for Single Case Research

    ERIC Educational Resources Information Center

    Parker, Richard I.; Hagan-Burke, Shanna

    2007-01-01

    An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R² as "percent of variance accounted for" do not resonate with most visual analysts. In fact, the only comparative analysis widely…

  5. Training Translators and Conference Interpreters. Language in Education: Theory and Practice, No. 58.

    ERIC Educational Resources Information Center

    Weber, Wilhelm K.

    An examination of translation and conference interpretation as well-established academic professions focuses on how they should be taught in order to maintain the integrity of the two professions and the highest standards in their exercise. An introductory section answers the question, "Can translation and interpretation be taught?,"…

  6. A Tutorial in Bayesian Potential Outcomes Mediation Analysis.

    PubMed

    Miočević, Milica; Gonzalez, Oscar; Valente, Matthew J; MacKinnon, David P

    2018-01-01

    Statistical mediation analysis is used to investigate intermediate variables in the relation between independent and dependent variables. Causal interpretation of mediation analyses is challenging because randomization of subjects to levels of the independent variable does not rule out the possibility of unmeasured confounders of the mediator to outcome relation. Furthermore, commonly used frequentist methods for mediation analysis compute the probability of the data given the null hypothesis, which is not the probability of a hypothesis given the data as in Bayesian analysis. Under certain assumptions, applying the potential outcomes framework to mediation analysis allows for the computation of causal effects, and statistical mediation in the Bayesian framework gives indirect effects probabilistic interpretations. This tutorial combines causal inference and Bayesian methods for mediation analysis so the indirect and direct effects have both causal and probabilistic interpretations. Steps in Bayesian causal mediation analysis are shown in the application to an empirical example.
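
    A rough numerical sketch of the probabilistic reading of an indirect effect, using simulated data and normal approximations to the coefficient posteriors (roughly what flat priors give in large samples); this is a stand-in illustration, not the potential-outcomes machinery developed in the tutorial.

      # Sketch: distribution of the indirect effect a*b from approximate coefficient posteriors.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 200
      x = rng.normal(size=n)                      # randomized independent variable
      m = 0.5 * x + rng.normal(size=n)            # mediator
      y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome

      def ols(design, target):
          """Return OLS coefficients and their covariance matrix."""
          coef = np.linalg.lstsq(design, target, rcond=None)[0]
          resid = target - design @ coef
          sigma2 = resid @ resid / (len(target) - design.shape[1])
          return coef, sigma2 * np.linalg.inv(design.T @ design)

      coef_a, cov_a = ols(np.column_stack([np.ones(n), x]), m)       # a-path: x -> m
      coef_b, cov_b = ols(np.column_stack([np.ones(n), m, x]), y)    # b-path: m -> y | x

      draws = 10000
      a_draws = rng.normal(coef_a[1], np.sqrt(cov_a[1, 1]), draws)
      b_draws = rng.normal(coef_b[1], np.sqrt(cov_b[1, 1]), draws)
      indirect = a_draws * b_draws
      lo, hi = np.percentile(indirect, [2.5, 97.5])
      print(f"Indirect effect ~ {indirect.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
      print("P(indirect effect > 0) =", round((indirect > 0).mean(), 3))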

  7. Alternative Test Methods for Electronic Parts

    NASA Technical Reports Server (NTRS)

    Plante, Jeannette

    2004-01-01

    It is common practice within NASA to test electronic parts at the manufacturing lot level to demonstrate, statistically, that parts from the lot tested will not fail in service using generic application conditions. The test methods and the generic application conditions used have been developed over the years through cooperation between NASA, DoD, and industry in order to establish a common set of standard practices. These common practices, found in MIL-STD-883, MIL-STD-750, military part specifications, EEE-INST-002, and other guidelines are preferred because they are considered to be effective and repeatable and their results are usually straightforward to interpret. These practices can sometimes be unavailable to some NASA projects due to special application conditions that must be addressed, such as schedule constraints, cost constraints, logistical constraints, or advances in the technology that make the historical standards an inappropriate choice for establishing part performance and reliability. Alternate methods have begun to emerge and to be used by NASA programs to test parts individually or as part of a system, especially when standard lot tests cannot be applied. Four alternate screening methods will be discussed in this paper: Highly accelerated life test (HALT), forward voltage drop tests for evaluating wire-bond integrity, burn-in options during or after highly accelerated stress test (HAST), and board-level qualification.

  8. Validating indicators of treatment response: application to trichotillomania.

    PubMed

    Nelson, Samuel O; Rogers, Kate; Rusch, Natalie; McDonough, Lauren; Malloy, Elizabeth J; Falkenstein, Martha J; Banis, Maria; Haaga, David A F

    2014-09-01

    Different studies of the treatment of trichotillomania (TTM) have used varying standards to determine the proportion of patients who obtain clinically meaningful benefits, but there is little information on the similarity of results yielded by these methods or on their comparative validity. Data from a stepped-care (Step 1: Web-based self-help; Step 2: Individual behavior therapy; N = 60) treatment study of TTM were used to evaluate 7 potential standards: complete abstinence, ≥ 25% symptom reduction, recovery of normal functioning, and clinical significance (recovery + statistically reliable change), each of the last 3 being measured by self-report (Massachusetts General Hospital Hairpulling Scale; MGH-HPS) or interview (Psychiatric Institute Trichotillomania Scale). Depending on the metric, response rates ranged from 25 to 68%. All standards were significantly associated with one another, though less strongly for the 25% symptom reduction metrics. Concurrent (with deciding to enter Step 2 treatment) and predictive (with 3-month follow-up treatment satisfaction, TTM-related impairment, quality of life, and diagnosis) validity results were variable but generally strongest for clinical significance as measured via self-report. Routine reporting of the proportion of patients who make clinically significant improvement on the MGH-HPS, supplemented by data on complete abstinence, would bolster the interpretability of TTM treatment outcome findings. PsycINFO Database Record (c) 2014 APA, all rights reserved.
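
    The clinical significance standard mentioned above is usually operationalized in the Jacobson-Truax form sketched below: statistically reliable change plus crossing a recovery cutoff. The reliability, cutoff, and scores used here are illustrative assumptions, not the study's values.

      # Sketch: Jacobson-Truax reliable change index plus a recovery cutoff.
      import math

      def reliable_change_index(pre, post, sd_pre, reliability):
          """RCI = (post - pre) / Sdiff, with Sdiff = sqrt(2) * SEm."""
          sem = sd_pre * math.sqrt(1 - reliability)
          return (post - pre) / (math.sqrt(2) * sem)

      def clinically_significant(pre, post, sd_pre, reliability, recovery_cutoff):
          rci = reliable_change_index(pre, post, sd_pre, reliability)
          # Improvement is assumed to mean a lower symptom score here.
          return abs(rci) > 1.96 and post < recovery_cutoff

      # Hypothetical MGH-HPS-like scores, reliability, and cutoff.
      print(clinically_significant(pre=20, post=8, sd_pre=5.0,
                                   reliability=0.85, recovery_cutoff=10))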

  9. An entropy-based statistic for genomewide association studies.

    PubMed

    Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao

    2005-07-01

    Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
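
    For contrast with the standard chi-square test, the sketch below also computes the familiar likelihood-ratio (G) statistic, which is built from a Shannon-entropy-like nonlinear function of the cell frequencies; it is shown only as a well-known entropy-based relative, not as the authors' statistic, and the allele counts are hypothetical.

      # Sketch: Pearson chi-square vs likelihood-ratio (G) statistic on a 2x2 allele table.
      import numpy as np
      from scipy.stats import chi2_contingency, chi2

      # Hypothetical allele counts: rows = cases/controls, columns = alleles A/a.
      table = np.array([[310, 190],
                        [260, 240]], dtype=float)

      chi2_stat, p_chi2, dof, expected = chi2_contingency(table, correction=False)

      g_stat = 2.0 * np.sum(table * np.log(table / expected))   # entropy-style statistic
      p_g = chi2.sf(g_stat, dof)

      print(f"Pearson chi-square = {chi2_stat:.3f}, p = {p_chi2:.4f}")
      print(f"Likelihood-ratio G = {g_stat:.3f}, p = {p_g:.4f}")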

  10. 77 FR 34044 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS); Subcommittee on Standards. Time and Date: June 20, 2012, 9 a.m.-5 p.m. EST..., Executive Secretary, NCVHS, National Center for Health Statistics, Centers for Disease Control and...

  11. Electrocardiographic interpretation skills of cardiology residents: are they competent?

    PubMed

    Sibbald, Matthew; Davies, Edward G; Dorian, Paul; Yu, Eric H C

    2014-12-01

    Achieving competency at electrocardiogram (ECG) interpretation among cardiology subspecialty residents has traditionally focused on interpreting a target number of ECGs during training. However, there is little evidence to support this approach. Further, there are no data documenting the competency of ECG interpretation skills among cardiology residents, who become de facto the gold standard in their practice communities. We tested 29 Cardiology residents from all 3 years in a large training program using a set of 20 ECGs collected from a community cardiology practice over a 1-month period. Residents interpreted half of the ECGs using a standard analytic framework, and half using their own approach. Residents were scored on the number of correct and incorrect diagnoses listed. Overall diagnostic accuracy was 58%. Of 6 potentially life-threatening diagnoses, residents missed 36% (123 of 348) including hyperkalemia (81%), long QT (52%), complete heart block (35%), and ventricular tachycardia (19%). Residents provided additional inappropriate diagnoses on 238 ECGs (41%). Diagnostic accuracy was similar between ECGs interpreted using an analytic framework vs ECGs interpreted without an analytic framework (59% vs 58%; F(1,1333) = 0.26; P = 0.61). Cardiology resident proficiency at ECG interpretation is suboptimal. Despite the use of an analytic framework, there remain significant deficiencies in ECG interpretation among Cardiology residents. A more systematic method of addressing these important learning gaps is urgently needed. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  12. Understanding sexual orientation and health in Canada: Who are we capturing and who are we missing using the Statistics Canada sexual orientation question?

    PubMed

    Dharma, Christoffer; Bauer, Greta R

    2017-04-20

    Public health research on inequalities in Canada depends heavily on population data sets such as the Canadian Community Health Survey. While sexual orientation has three dimensions - identity, behaviour and attraction - Statistics Canada and public health agencies assess sexual orientation with a single questionnaire item on identity, defined behaviourally. This study aims to evaluate this item, to allow for clearer interpretation of sexual orientation frequencies and inequalities. Through an online convenience sampling of Canadians ≥14 years of age, participants (n = 311) completed the Statistics Canada question and a second set of sexual orientation questions. The single-item question had an 85.8% sensitivity in capturing sexual minorities, broadly defined by their sexual identity, lifetime behaviour and attraction. Kappa statistic for agreement between the single item and sexual identity was 0.89; with past year, lifetime behaviour and attraction were 0.39, 0.48 and 0.57 respectively. The item captured 99.3% of those with a sexual minority identity, 84.2% of those with any lifetime same-sex partners, 98.4% with a past-year same-sex partner, and 97.8% who indicated at least equal attraction to same-sex persons. Findings from Statistics Canada surveys can be best interpreted as applying to those who identify as sexual minorities. Analyses using this measure will underidentify those with same-sex partners or attractions who do not identify as a sexual minority, and should be interpreted accordingly. To understand patterns of sexual minority health in Canada, there is a need to incorporate other dimensions of sexual orientation.

  13. Hierarchical Dirichlet process model for gene expression clustering

    PubMed Central

    2013-01-01

    Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet process (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces unnecessary clustering fragments. PMID:23587447
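
    A minimal sketch of the Chinese restaurant process prior that underlies the Gibbs sampling metaphor mentioned above: each new item joins an existing cluster with probability proportional to its size, or starts a new cluster with probability proportional to alpha. This is only a prior draw, not the full HDP model or the authors' sampler.

      # Sketch: sampling cluster assignments from a Chinese restaurant process prior.
      import numpy as np

      def chinese_restaurant_process(n_items, alpha, rng):
          assignments = [0]                       # first item starts the first cluster
          for i in range(1, n_items):
              counts = np.bincount(assignments)
              probs = np.append(counts, alpha) / (i + alpha)
              assignments.append(rng.choice(len(probs), p=probs))
          return np.array(assignments)

      rng = np.random.default_rng(7)
      clusters = chinese_restaurant_process(n_items=50, alpha=2.0, rng=rng)
      print("Cluster sizes:", np.bincount(clusters))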

  14. Application of machine learning and expert systems to Statistical Process Control (SPC) chart interpretation

    NASA Technical Reports Server (NTRS)

    Shewhart, Mark

    1991-01-01

    Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
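
    A small sketch of the kind of rule such a system has to encode, on simulated data: flag points beyond the 3-sigma control limits and runs of eight consecutive points on one side of the centre line (one of the classic Western Electric-style patterns). The limits and rule set here are illustrative, not the AISC prototype's.

      # Sketch: simple control-chart rule checks on a simulated process with a shift.
      import numpy as np

      rng = np.random.default_rng(8)
      x = rng.normal(10.0, 1.0, 60)
      x[45:] += 2.0                                      # simulated process shift

      center, sigma = x[:30].mean(), x[:30].std(ddof=1)  # baseline estimate
      ucl, lcl = center + 3 * sigma, center - 3 * sigma

      beyond_limits = np.where((x > ucl) | (x < lcl))[0]

      run_flags = []
      side = np.sign(x - center)
      count = 1
      for i in range(1, len(x)):
          count = count + 1 if side[i] == side[i - 1] else 1
          if count >= 8:
              run_flags.append(i)                        # run of 8+ on one side is in effect here

      print("Points beyond 3-sigma limits:", beyond_limits.tolist())
      print("Indices flagged by the run-of-8 rule:", run_flags)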

  15. Neuronal Correlation Parameter and the Idea of Thermodynamic Entropy of an N-Body Gravitationally Bounded System.

    PubMed

    Haranas, Ioannis; Gkigkitzis, Ioannis; Kotsireas, Ilias; Austerlitz, Carlos

    2017-01-01

    Understanding how the brain encodes information and performs computation requires statistical and functional analysis. Given the complexity of the human brain, simple methods that facilitate the interpretation of statistical correlations among different brain regions can be very useful. In this report we introduce a numerical correlation measure that may aid the interpretation of correlational neuronal data and may assist in the evaluation of different brain states. Describing the dynamical brain system through a global numerical measure may indicate the presence of an action principle, which may facilitate an application of physics principles in the study of the human brain and cognition.

  16. Dynamical interpretation of conditional patterns

    NASA Technical Reports Server (NTRS)

    Adrian, R. J.; Moser, R. D.; Moin, P.

    1988-01-01

    While great progress is being made in characterizing the 3-D structure of organized turbulent motions using conditional averaging analysis, there is a lack of theoretical guidance regarding the interpretation and utilization of such information. Questions concerning the significance of the structures, their contributions to various transport properties, and their dynamics cannot be answered without recourse to appropriate dynamical governing equations. One approach which addresses some of these questions uses the conditional fields as initial conditions and calculates their evolution from the Navier-Stokes equations, yielding valuable information about stability, growth, and longevity of the mean structure. To interpret statistical aspects of the structures, a different type of theory which deals with the structures in the context of their contributions to the statistics of the flow is needed. As a first step toward this end, an effort was made to integrate the structural information from the study of organized structures with a suitable statistical theory. This is done by stochastically estimating the two-point conditional averages that appear in the equation for the one-point probability density function, and relating the structures to the conditional stresses. Salient features of the estimates are identified, and the structure of the one-point estimates in channel flow is defined.

  17. Photospheric Magnetic Field Properties of Flaring versus Flare-quiet Active Regions. II. Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.

    2003-10-01

    We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T2-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T2-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the ``sorting all permutations'' method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
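
    A minimal sketch of the two-sample Hotelling's T² statistic used for the variable ranking described above follows; the synthetic magnetic-parameter data and group sizes are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch of a two-sample Hotelling's T^2 statistic, the quantity used
# above to rank variable combinations. Synthetic data; illustrative only.
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 and an F-based p-value (equal covariance assumed)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2, p = len(x), len(y), x.shape[1]
    diff = x.mean(axis=0) - y.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
              (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
    f_stat = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value

rng = np.random.default_rng(2)
flaring = rng.normal([1.0, 0.5, 0.2], 1.0, size=(24, 3))   # stand-in flare-event epochs
quiet = rng.normal([0.0, 0.0, 0.0], 1.0, size=(24, 3))     # stand-in flare-quiet epochs
t2, p = hotelling_t2(flaring, quiet)
print(f"T^2 = {t2:.2f}, p = {p:.4f}")
```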

  18. Statistical Approaches to Interpretation of Local, Regional, and National Highway-Runoff and Urban-Stormwater Data

    USGS Publications Warehouse

    Tasker, Gary D.; Granato, Gregory E.

    2000-01-01

    Decision makers need viable methods for the interpretation of local, regional, and national highway-runoff and urban-stormwater data including flows, concentrations and loads of chemical constituents and sediment, potential effects on receiving waters, and the potential effectiveness of various best management practices (BMPs). Valid (useful for intended purposes), current, and technically defensible stormwater-runoff models are needed to interpret data collected in field studies, to support existing highway and urban-runoff planning processes, to meet National Pollutant Discharge Elimination System (NPDES) requirements, and to provide methods for computation of Total Maximum Daily Loads (TMDLs) systematically and economically. Historically, conceptual, simulation, empirical, and statistical models of varying levels of detail, complexity, and uncertainty have been used to meet various data-quality objectives in the decision-making processes necessary for the planning, design, construction, and maintenance of highways and for other land-use applications. Water-quality simulation models attempt a detailed representation of the physical processes and mechanisms at a given site. Empirical and statistical regional water-quality assessment models provide a more general picture of water quality or changes in water quality over a region. All these modeling techniques share one common aspect: their predictive ability is poor without suitable site-specific data for calibration. To properly apply the correct model, one must understand the classification of variables, the unique characteristics of water-resources data, and the concept of population structure and analysis. Classifying the variables being used to analyze data may determine which statistical methods are appropriate for data analysis. An understanding of the characteristics of water-resources data is necessary to evaluate the applicability of different statistical methods, to interpret the results of these techniques, and to use tools and techniques that account for the unique nature of water-resources data sets. Populations of data on stormwater-runoff quantity and quality are often best modeled by use of logarithmic transformations. Therefore, these factors need to be considered to form valid, current, and technically defensible stormwater-runoff models. Regression analysis is an accepted method for interpretation of water-resources data and for prediction of current or future conditions at sites that fit the input data model. Regression analysis is designed to provide an estimate of the average response of a system as it relates to variation in one or more known variables. To produce valid models, however, regression analysis should include visual analysis of scatterplots, an examination of the regression equation, evaluation of the method design assumptions, and regression diagnostics. A number of statistical techniques are described in the text and in the appendixes to provide information necessary to interpret data by use of appropriate methods. Uncertainty is an important part of any decision-making process. In order to deal with uncertainty problems, the analyst needs to know the severity of the statistical uncertainty of the methods used to predict water quality. Statistical models need to be based on information that is meaningful, representative, complete, precise, accurate, and comparable to be deemed valid, up to date, and technically supportable. To assess uncertainty in the analytical tools, the modeling methods, and the underlying data set, all of these components need to be documented and communicated in an accessible format within project publications.
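
    Because the record notes that stormwater-runoff quantity and quality data are often best handled with logarithmic transformations, the sketch below fits a simple log-log regression and checks a basic diagnostic; the variable names (flow, load) and data are illustrative assumptions only.

```python
# Minimal sketch: log-log regression of constituent load against streamflow,
# the kind of transformation the record recommends for runoff data.
# Synthetic data and variable names; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
flow = rng.lognormal(mean=2.0, sigma=0.8, size=60)              # hypothetical discharge
load = 0.5 * flow ** 1.2 * rng.lognormal(0.0, 0.3, size=60)     # hypothetical load

res = stats.linregress(np.log10(flow), np.log10(load))
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, "
      f"r^2={res.rvalue**2:.3f}, p={res.pvalue:.2e}")

# Residual standard error on the log scale (ddof=2 for the two fitted coefficients)
residuals = np.log10(load) - (res.intercept + res.slope * np.log10(flow))
print("residual std dev:", round(float(residuals.std(ddof=2)), 3))
```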

  19. DOE interpretations Guide to OSH standards. Update to the Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-31

    Reflecting Secretary O'Leary's focus on occupational safety and health, the Office of Occupational Safety is pleased to provide you with the latest update to the DOE Interpretations Guide to OSH Standards. This Guide was developed in cooperation with the Occupational Safety and Health Administration, which continued its support during this last revision by facilitating access to the interpretations found on the OSHA Computerized Information System (OCIS). This March 31, 1994 update contains 123 formal interpretation letters written by OSHA. As a result of the unique requests received by the 1-800 Response Line, this update also contains 38 interpretations developed by DOE. This new occupational safety and health information adds still more important guidance to the four-volume reference set that you presently have in your possession.

  20. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that can be applied for the management, control and improvement of processes, with the purpose of analyzing technical measurement results. An analysis of these international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. To support this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  1. Making Decisions with Data: Are We Environmentally Friendly?

    ERIC Educational Resources Information Center

    English, Lyn; Watson, Jane

    2016-01-01

    Statistical literacy is a vital component of numeracy. Students need to learn to critically evaluate and interpret statistical information if they are to become informed citizens. This article examines a Year 5 unit of work that uses the data collection and analysis cycle within a sustainability context.

  2. 76 FR 17191 - Staff Accounting Bulletin No. 114

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ...This Staff Accounting Bulletin (SAB) revises or rescinds portions of the interpretive guidance included in the codification of the Staff Accounting Bulletin Series. This update is intended to make the relevant interpretive guidance consistent with current authoritative accounting guidance issued as part of the Financial Accounting Standards Board's Accounting Standards Codification. The principal changes involve revision or removal of accounting guidance references and other conforming changes to ensure consistency of referencing throughout the SAB Series.

  3. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
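
    Under an unequal-variance signal detection model, the z-ROC slope equals the ratio of the lure to the target evidence standard deviation, so widening the lure distribution steepens the slope, as the record reports. The sketch below illustrates this relationship with assumed parameter values, not the study's fitted estimates.

```python
# Minimal sketch: under an unequal-variance signal detection (UVSD) model the
# z-ROC slope equals sigma_lure / sigma_target, so widening the lure evidence
# distribution steepens the slope. Parameters below are illustrative only.
import numpy as np
from scipy import stats

def zroc_slope(d_prime, sigma_target, sigma_lure, criteria):
    """Slope of z(hit rate) vs z(false-alarm rate) across confidence criteria."""
    hit = stats.norm.sf(criteria, loc=d_prime, scale=sigma_target)
    fa = stats.norm.sf(criteria, loc=0.0, scale=sigma_lure)
    slope, _, _, _, _ = stats.linregress(stats.norm.ppf(fa), stats.norm.ppf(hit))
    return slope

criteria = np.linspace(-0.5, 2.0, 5)          # hypothetical confidence criteria
print("narrow lure distribution:", round(zroc_slope(1.5, 1.25, 1.00, criteria), 3))
print("wide lure distribution:  ", round(zroc_slope(1.5, 1.25, 1.20, criteria), 3))
```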

  4. A Generalized Approach for the Interpretation of Geophysical Well Logs in Ground-Water Studies - Theory and Application

    USGS Publications Warehouse

    Paillet, Frederick L.; Crowder, R.E.

    1996-01-01

    Quantitative analysis of geophysical logs in ground-water studies often involves at least as broad a range of applications and variation in lithology as is typically encountered in petroleum exploration, making such logs difficult to calibrate and complicating inversion problem formulation. At the same time, data inversion and analysis depend on inversion model formulation and refinement, so that log interpretation cannot be deferred to a geophysical log specialist unless active involvement with interpretation can be maintained by such an expert over the lifetime of the project. We propose a generalized log-interpretation procedure designed to guide hydrogeologists in the interpretation of geophysical logs, and in the integration of log data into ground-water models that may be systematically refined and improved in an iterative way. The procedure is designed to maximize the effective use of three primary contributions from geophysical logs: (1) The continuous depth scale of the measurements along the well bore; (2) The in situ measurement of lithologic properties and the correlation with hydraulic properties of the formations over a finite sample volume; and (3) Multiple independent measurements that can potentially be inverted for multiple physical or hydraulic properties of interest. The approach is formulated in the context of geophysical inversion theory, and is designed to be interfaced with surface geophysical soundings and conventional hydraulic testing. The step-by-step procedures given in our generalized interpretation and inversion technique are based on both qualitative analysis designed to assist formulation of the interpretation model, and quantitative analysis used to assign numerical values to model parameters. The approach bases a decision as to whether quantitative inversion is statistically warranted by formulating an over-determined inversion. If no such inversion is consistent with the inversion model, quantitative inversion is judged not possible with the given data set. Additional statistical criteria such as the statistical significance of regressions are used to guide the subsequent calibration of geophysical data in terms of hydraulic variables in those situations where quantitative data inversion is considered appropriate.
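
    The notion of an over-determined inversion judged by statistical criteria can be illustrated with a least-squares toy problem; the design matrix, parameter names and significance test below are assumptions for demonstration and do not reproduce the authors' procedure.

```python
# Minimal sketch of an over-determined linear inversion of the sort used to
# decide whether quantitative calibration is statistically warranted:
# more measurements than model parameters, solved by least squares, then
# judged by the significance of the fit. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_depths, n_params = 40, 2
true_params = np.array([0.25, -0.08])            # hypothetical model parameters

# G: design matrix linking a log response to the property of interest
G = np.column_stack([np.ones(n_depths), rng.normal(size=n_depths)])
d = G @ true_params + rng.normal(0.0, 0.02, size=n_depths)      # observed data

m_hat, rss, rank, _ = np.linalg.lstsq(G, d, rcond=None)
tss = np.sum((d - d.mean()) ** 2)
r2 = 1.0 - rss[0] / tss
# F-test for overall regression significance
f = (r2 / (n_params - 1)) / ((1 - r2) / (n_depths - n_params))
p = stats.f.sf(f, n_params - 1, n_depths - n_params)
print(f"estimated parameters: {m_hat.round(3)}, R^2={r2:.3f}, p={p:.2e}")
```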

  5. Children's misunderstandings of hazard warning signs in the new globally harmonized system for classification and labeling.

    PubMed

    Latham, Garry; Long, Tony; Devitt, Patric

    2013-12-01

    Accidental chemical poisoning causes more than 35 000 child deaths every year across the world, and it leads to disease, disability, and suffering for many more children. Children's ignorance of dangers and their failure to interpret hazard warning signs as intended contribute significantly to this problem. A new Globally Harmonized System for Classification and Labeling is being implemented internationally with a view to unifying the current multiple and disparate national systems. This study was designed to establish a productive, effective means of teaching the new GHS warning signs to primary school children (aged 7-11 years). A pre-test, post-test, follow-up test design was employed, with a teaching intervention informed by a Delphi survey of expert opinion. Children from one school formed the experimental group (n = 49) and a second school provided a control group (n = 23). Both groups showed a gain in knowledge from pre-test to post-test, the experimental group with a larger gain but which was not statistically significant. However, longer-term retention of knowledge, as shown by the follow-up test, was statistically significantly greater in the experimental group (p = 0.001). The employment of teaching to match children's preferred learning styles, and the use of active learning were found to be related to improved retention of knowledge. Part of the study involved eliciting children's interpretation of standard hazard warning symbols, and this provoked considerable concern over the potential for dangerous misinterpretation with disastrous consequences. This article focuses on the reasons for such misconception and the action required to address this successfully in testing the intervention.

  6. Validations of a portable home sleep study with twelve-lead polysomnography: comparisons and insights into a variable gold standard.

    PubMed

    Michaelson, Peter G; Allan, Patrick; Chaney, John; Mair, Eric A

    2006-11-01

    Accurate and timely diagnosis for patients with obstructive sleep apnea (OSA) is imperative. Unfortunately, growing interest in this diagnosis has resulted in increased requests and waiting times for polysomnography (PSG), as well as a potential delay in diagnosis and treatment. This study evaluated the accuracy and viability of utilizing SNAP (SNAP Laboratories, LLC, Wheeling, Illinois), a portable home sleep test, as an alternative to traditional PSG in diagnosing OSA. This prospective clinical trial included 59 patients evaluated at our institution's sleep laboratory. Concurrent PSG and SNAP testing was performed for 1 night on each patient. Independent, blinded readers at our institution and at an outside-accredited institution read the PSG data, and 2 independent, blinded readers interpreted the SNAP data at SNAP laboratories. The apnea-hypopnea index (AHI) was used to compare the 2 testing modalities. The correlation coefficient, receiver operating characteristic curve analysis, and the Bland-Altman curves, as well as sensitivity, specificity, inter-reader variability, positive predictive value, and negative predictive value, were used to compare SNAP and PSG. There is a definitive, statistically sound correlation between the AHIs determined from both PSG and SNAP. This relationship holds true for all measures of comparison, while displaying a concerning, weaker correlation between the different PSG interpretations. There is a convincing correlation between the study-determined AHIs of both PSG and SNAP. This finding supports SNAP as a suitable alternative to PSG in identifying OSA, while accentuating the inherent variation present in a PSG-derived AHI. This test expands the diagnostic and therapeutic prowess of the practicing otolaryngologist by offering an alternative OSA testing modality that is associated with not only less expense, decreased waiting time, and increased convenience, but also statistically proven accuracy.

  7. ‘N-of-1-pathways’ unveils personal deregulated mechanisms from a single pair of RNA-Seq samples: towards precision medicine

    PubMed Central

    Gardeux, Vincent; Achour, Ikbel; Li, Jianrong; Maienschein-Cline, Mark; Li, Haiquan; Pesce, Lorenzo; Parinandi, Gurunadh; Bahroos, Neil; Winn, Robert; Foster, Ian; Garcia, Joe G N; Lussier, Yves A

    2014-01-01

    Background The emergence of precision medicine allowed the incorporation of individual molecular data into patient care. Indeed, DNA sequencing predicts somatic mutations in individual patients. However, these genetic features overlook dynamic epigenetic and phenotypic response to therapy. Meanwhile, accurate personal transcriptome interpretation remains an unmet challenge. Further, N-of-1 (single-subject) efficacy trials are increasingly pursued, but are underpowered for molecular marker discovery. Method ‘N-of-1-pathways’ is a global framework relying on three principles: (i) the statistical universe is a single patient; (ii) significance is derived from geneset/biomodules powered by paired samples from the same patient; and (iii) similarity between genesets/biomodules assesses commonality and differences, within-study and cross-studies. Thus, patient gene-level profiles are transformed into deregulated pathways. From RNA-Seq of 55 lung adenocarcinoma patients, N-of-1-pathways predicts the deregulated pathways of each patient. Results Cross-patient N-of-1-pathways obtains comparable results with conventional genesets enrichment analysis (GSEA) and differentially expressed gene (DEG) enrichment, validated in three external evaluations. Moreover, heatmap and star plots highlight both individual and shared mechanisms ranging from molecular to organ-systems levels (eg, DNA repair, signaling, immune response). Patients were ranked based on the similarity of their deregulated mechanisms to those of an independent gold standard, generating unsupervised clusters of diametric extreme survival phenotypes (p=0.03). Conclusions The N-of-1-pathways framework provides a robust statistical and relevant biological interpretation of individual disease-free survival that is often overlooked in conventional cross-patient studies. It enables mechanism-level classifiers with smaller cohorts as well as N-of-1 studies. Software http://lussierlab.org/publications/N-of-1-pathways PMID:25301808
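
    One way to picture the single-patient, geneset-level testing principle is a paired test over the genes of each geneset for one tumour/normal pair; the sketch below uses a Wilcoxon signed-rank test with invented genesets and expression values, and illustrates the idea rather than the published N-of-1-pathways algorithm.

```python
# Hedged sketch of a single-subject, geneset-level test: for one patient,
# paired tumour vs normal expression of the genes in each geneset is compared
# with a Wilcoxon signed-rank test. This illustrates the "statistical universe
# is a single patient" idea, not the exact N-of-1-pathways procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
genes = [f"g{i}" for i in range(200)]
normal = dict(zip(genes, rng.normal(5.0, 1.0, size=200)))     # log2 expression
tumour = {g: v + rng.normal(0.0, 0.3) for g, v in normal.items()}

# Hypothetical genesets; shift the first one upward in the tumour sample
genesets = {"PATHWAY_A": genes[:25], "PATHWAY_B": genes[25:60]}
for g in genesets["PATHWAY_A"]:
    tumour[g] += 1.5

for name, members in genesets.items():
    diffs = [tumour[g] - normal[g] for g in members]
    stat, p = stats.wilcoxon(diffs)
    print(f"{name}: median shift={np.median(diffs):+.2f}, p={p:.3g}")
```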

  8. ‘N-of-1-pathways’ unveils personal deregulated mechanisms from a single pair of RNA-Seq samples: Towards precision medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardeux, Vincent; Achour, Ikbel; Li, Jianrong

    Background: The emergence of precision medicine allowed the incorporation of individual molecular data into patient care. Indeed, DNA sequencing predicts somatic mutations in individual patients. However, these genetic features overlook dynamic epigenetic and phenotypic response to therapy. Meanwhile, accurate personal transcriptome interpretation remains an unmet challenge. Further, N-of-1 (single-subject) efficacy trials are increasingly pursued, but are underpowered for molecular marker discovery. Method: ‘N-of-1-pathways’ is a global framework relying on three principles: (i) the statistical universe is a single patient; (ii) significance is derived from geneset/biomodules powered by paired samples from the same patient; and (iii) similarity between genesets/biomodules assesses commonality and differences, within-study and cross-studies. Thus, patient gene-level profiles are transformed into deregulated pathways. From RNA-Seq of 55 lung adenocarcinoma patients, N-of-1-pathways predicts the deregulated pathways of each patient. Results: Cross-patient N-of-1-pathways obtains comparable results with conventional genesets enrichment analysis (GSEA) and differentially expressed gene (DEG) enrichment, validated in three external evaluations. Moreover, heatmap and star plots highlight both individual and shared mechanisms ranging from molecular to organ-systems levels (eg, DNA repair, signaling, immune response). Patients were ranked based on the similarity of their deregulated mechanisms to those of an independent gold standard, generating unsupervised clusters of diametric extreme survival phenotypes (p=0.03). Conclusions: The N-of-1-pathways framework provides a robust statistical and relevant biological interpretation of individual disease-free survival that is often overlooked in conventional cross-patient studies. It enables mechanism-level classifiers with smaller cohorts as well as N-of-1 studies.

  9. ‘N-of-1-pathways’ unveils personal deregulated mechanisms from a single pair of RNA-Seq samples: Towards precision medicine

    DOE PAGES

    Gardeux, Vincent; Achour, Ikbel; Li, Jianrong; ...

    2014-11-01

    Background: The emergence of precision medicine allowed the incorporation of individual molecular data into patient care. Indeed, DNA sequencing predicts somatic mutations in individual patients. However, these genetic features overlook dynamic epigenetic and phenotypic response to therapy. Meanwhile, accurate personal transcriptome interpretation remains an unmet challenge. Further, N-of-1 (single-subject) efficacy trials are increasingly pursued, but are underpowered for molecular marker discovery. Method: ‘N-of-1-pathways’ is a global framework relying on three principles: (i) the statistical universe is a single patient; (ii) significance is derived from geneset/biomodules powered by paired samples from the same patient; and (iii) similarity between genesets/biomodules assesses commonality and differences, within-study and cross-studies. Thus, patient gene-level profiles are transformed into deregulated pathways. From RNA-Seq of 55 lung adenocarcinoma patients, N-of-1-pathways predicts the deregulated pathways of each patient. Results: Cross-patient N-of-1-pathways obtains comparable results with conventional genesets enrichment analysis (GSEA) and differentially expressed gene (DEG) enrichment, validated in three external evaluations. Moreover, heatmap and star plots highlight both individual and shared mechanisms ranging from molecular to organ-systems levels (eg, DNA repair, signaling, immune response). Patients were ranked based on the similarity of their deregulated mechanisms to those of an independent gold standard, generating unsupervised clusters of diametric extreme survival phenotypes (p=0.03). Conclusions: The N-of-1-pathways framework provides a robust statistical and relevant biological interpretation of individual disease-free survival that is often overlooked in conventional cross-patient studies. It enables mechanism-level classifiers with smaller cohorts as well as N-of-1 studies.

  10. Analysis of filament statistics in fast camera data on MAST

    NASA Astrophysics Data System (ADS)

    Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James

    2017-10-01

    Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape off layer (SOL) density profiles and thus control first wall erosion, impurity flushing and coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.

  11. A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data.

    PubMed

    Nishiyama, Takeshi; Takahashi, Kunihiko; Tango, Toshiro; Pinto, Dalila; Scherer, Stephen W; Takami, Satoshi; Kishino, Hirohisa

    2011-05-26

    Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
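
    The scan-statistic idea of extracting a high-scoring gene cluster from an ordered pathway can be conveyed with a simple sliding-window search and a permutation check; the window score, data and gene ordering below are illustrative assumptions, not the likelihood-based statistic of the paper.

```python
# Hedged sketch of a scan-statistic style search: slide a fixed-size window
# along genes ordered within a pathway, score each window by the excess of
# case CNV hits over control hits, and report the maximum-scoring window.
# This conveys the idea only; the published method uses a likelihood-based
# scan statistic with a proper significance assessment.
import numpy as np

rng = np.random.default_rng(6)
n_genes, window = 100, 10
case_hits = rng.poisson(0.5, size=n_genes)
control_hits = rng.poisson(0.5, size=n_genes)
case_hits[40:50] += rng.poisson(2.0, size=10)   # planted disease-associated cluster

scores = np.array([
    case_hits[i:i + window].sum() - control_hits[i:i + window].sum()
    for i in range(n_genes - window + 1)
])
best = scores.argmax()
print(f"best window: genes {best}..{best + window - 1}, score {scores[best]}")

# Crude permutation check: flip case/control counts gene-wise and re-scan.
null_max = []
combined = np.stack([case_hits, control_hits])
for _ in range(500):
    flip = rng.integers(0, 2, size=n_genes).astype(bool)
    c = np.where(flip, combined[1], combined[0])
    k = np.where(flip, combined[0], combined[1])
    null = max(c[i:i + window].sum() - k[i:i + window].sum()
               for i in range(n_genes - window + 1))
    null_max.append(null)
p = (np.sum(np.array(null_max) >= scores[best]) + 1) / (len(null_max) + 1)
print(f"permutation p = {p:.3f}")
```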

  12. 78 FR 65317 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: November 12, 2013 8:30 a.m.-5:30 p.m. EST. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...

  13. 78 FR 54470 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards Time and Date: September 18, 2013 8:30 p.m.--5:00 p.m. EDT. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...

  14. 78 FR 942 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-07

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: February 27, 2013 9:30 a.m.-5:00 p.m... electronic claims attachments. The National Committee on Vital Health Statistics is the public advisory body...

  15. 78 FR 34100 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: June 17, 2013 1:00 p.m.-5:00 p.m. e.d..., National Center for Health Statistics, 3311 Toledo Road, Auditorium B & C, Hyattsville, Maryland 20782...

  16. Ultrasound criteria and guided fine-needle aspiration diagnostic yields in small animal peritoneal, mesenteric and omental disease.

    PubMed

    Feeney, Daniel A; Ober, Christopher P; Snyder, Laura A; Hill, Sara A; Jessen, Carl R

    2013-01-01

    Peritoneal, mesenteric, and omental diseases are important causes of morbidity and mortality in humans and animals, although information in the veterinary literature is limited. The purposes of this retrospective study were to determine whether objectively applied ultrasound interpretive criteria are statistically useful in differentiating among cytologically defined normal, inflammatory, and neoplastic peritoneal conditions in dogs and cats. A second goal was to determine the cytologically interpretable yield on ultrasound-guided, fine-needle sampling of peritoneal, mesenteric, or omental structures. Sonographic criteria agreed upon by the authors were retrospectively and independently applied by two radiologists to the available ultrasound images without knowledge of the cytologic diagnosis and statistically compared to the ultrasound-guided, fine-needle aspiration cytologic interpretations. A total of 72 dogs and 49 cats with abdominal peritoneal, mesenteric, or omental (peritoneal) surface or effusive disease and 17 dogs and 3 cats with no cytologic evidence of inflammation or neoplasia were included. The optimized, ultrasound criteria-based statistical model created independently for each radiologist yielded an equation-based diagnostic category placement accuracy of 63.2-69.9% across the two involved radiologists. Regional organ-associated masses or nodules as well as aggregated bowel and peritoneal thickening were more associated with peritoneal neoplasia whereas localized, severely complex fluid collections were more associated with inflammatory peritoneal disease. The cytologically interpretable yield for ultrasound-guided fine-needle sampling was 72.3% with no difference between species, making this a worthwhile clinical procedure. © 2013 Veterinary Radiology & Ultrasound.

  17. Translators and Interpreters Certification in Australia, Canada, the USA and Ukraine: Comparative Analysis

    ERIC Educational Resources Information Center

    Skyba, Kateryna

    2014-01-01

    The article presents an overview of the certification process by which potential translators and interpreters demonstrate minimum standards of performance to warrant official or professional recognition of their ability to translate or interpret and to practice professionally in Australia, Canada, the USA and Ukraine. The aim of the study is to…

  18. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
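
    A minimal Bland-Altman sketch follows, computing the bias and 95% limits of agreement between two assays and checking for a proportional trend; the synthetic values, including the injected constant and proportional error, are assumptions for illustration.

```python
# Minimal sketch of a Bland-Altman comparison of two quantitative assays:
# bias (mean difference) and 95% limits of agreement. Synthetic values only.
import numpy as np

rng = np.random.default_rng(7)
reference = rng.uniform(5, 50, size=40)                  # e.g. previously validated assay
candidate = reference * 1.03 + rng.normal(0.5, 1.0, 40)  # proportional + constant error

diffs = candidate - reference
means = (candidate + reference) / 2
bias = diffs.mean()
loa = 1.96 * diffs.std(ddof=1)
print(f"bias = {bias:.2f}, 95% limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")

# A nonzero slope of the differences against the means signals proportional error.
slope = np.polyfit(means, diffs, 1)[0]
print(f"trend of differences vs means: {slope:.3f} per unit")
```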

  19. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data-perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
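
    The flip-and-estimate idea behind applying Kaplan-Meier to left-censored concentrations can be shown in a few lines: subtract every value from a constant larger than the maximum so that left-censoring becomes right-censoring. The sketch below uses invented concentrations and is a bare-bones illustration, not the S-language routines described in the record.

```python
# Hedged sketch of the flip-and-estimate trick for left-censored data:
# values below a detection limit are left-censored, so subtracting every value
# from a constant larger than the maximum turns them into right-censored
# "survival" data to which Kaplan-Meier applies. Illustration only.
import numpy as np

# Hypothetical concentrations; detected=False means "< reporting limit"
# and the stored value is that reporting limit.
conc = np.array([0.5, 1.2, 0.8, 2.0, 0.5, 3.1, 1.0, 0.7])
detected = np.array([False, True, True, True, False, True, False, True])

flip = conc.max() + 1.0
t = flip - conc                          # left-censoring becomes right-censoring

order = np.argsort(t)                    # ascending flipped time = descending conc
surv, at_risk = 1.0, len(t)
for ti, di in zip(t[order], detected[order]):
    if di:                               # a detected value is a Kaplan-Meier "event"
        surv *= (at_risk - 1) / at_risk
        c = flip - ti
        print(f"concentration {c:.1f}: estimated P(X < {c:.1f}) = {surv:.3f}")
    at_risk -= 1                         # this observation leaves the risk set
```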

  20. On Teaching about the Coefficient of Variation in Introductory Statistics Courses

    ERIC Educational Resources Information Center

    Trafimow, David

    2014-01-01

    The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
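
    As a quick worked example of that relationship, the coefficient of variation is simply the standard deviation divided by the mean; the scores below are arbitrary illustrative values.

```python
# Minimal illustration: the coefficient of variation CV = s / mean, so a CV of
# 0.10 means the standard deviation is 10% of the mean. Arbitrary example data.
import numpy as np

scores = np.array([72.0, 75.0, 69.0, 78.0, 71.0, 74.0])
mean, sd = scores.mean(), scores.std(ddof=1)
cv = sd / mean
print(f"mean = {mean:.1f}, sd = {sd:.2f}, CV = {cv:.3f} ({cv:.1%} of the mean)")
```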
