Sample records for easily interpretable statistics

  1. Student's Conceptions in Statistical Graph's Interpretation

    ERIC Educational Resources Information Center

    Kukliansky, Ida

    2016-01-01

    Histograms, box plots and cumulative distribution graphs are popular graphic representations for statistical distributions. The main research question that this study focuses on is how college students deal with interpretation of these statistical graphs when translating graphical representations into analytical concepts in descriptive statistics.…

  2. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
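
    As a hedged illustration of the kind of procedure evaluated above (not the authors' released code, which is in the paper's Supplementary Material), relevance scores from a random forest can be given empirical p-values by re-estimating them under permutations of the outcome; the function name and settings below are hypothetical:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def importance_pvalues(X, y, n_permutations=200, seed=0):
          """Empirical p-values for feature importances via y-permutation."""
          rng = np.random.default_rng(seed)
          rf = RandomForestClassifier(n_estimators=500, random_state=0)
          observed = rf.fit(X, y).feature_importances_
          null = np.empty((n_permutations, X.shape[1]))
          for b in range(n_permutations):
              rf.fit(X, rng.permutation(y))  # break the X-y association
              null[b] = rf.feature_importances_
          # fraction of null importances at least as large as the observed ones
          return (1 + (null >= observed).sum(axis=0)) / (n_permutations + 1)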

  3. Interpretation of statistical results.

    PubMed

    García Garmendia, J L; Maroto Monserrat, F

    2018-02-21

    The appropriate interpretation of statistical results is crucial to understanding advances in medical science. Statistical tools allow us to transform the uncertainty and apparent chaos in nature into measurable parameters that are applicable to our clinical practice. Understanding the meaning and actual scope of these instruments is essential for researchers, for the funders of research, and for professionals who require continual updating based on good evidence and support for decision making. Various aspects of study design, results and statistical analysis are reviewed, trying to facilitate their comprehension from the basics to what is most common but not well understood, and offering a constructive, non-exhaustive but realistic look. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  4. Combinatorial interpretation of Haldane-Wu fractional exclusion statistics.

    PubMed

    Aringazin, A K; Mazhitov, M I

    2002-08-01

    Assuming that the maximal allowed number of identical particles in a state is an integer parameter, q, we derive the statistical weight and analyze the associated equation that defines the statistical distribution. The derived distribution covers the Fermi-Dirac and Bose-Einstein ones in the particular cases q = 1 and q → ∞ (n_i/q → 1), respectively. We show that the derived statistical weight provides a natural combinatorial interpretation of Haldane-Wu fractional exclusion statistics, and present exact solutions of the distribution equation.
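
    A hedged sketch of the limiting behaviour in LaTeX: the standard maximal-occupancy (Gentile-type) distribution, which shares the limits stated above but is not necessarily the paper's exact result, is

      \bar{n}_i(q) = \frac{1}{e^{\beta(\varepsilon_i-\mu)} - 1}
                   - \frac{q+1}{e^{(q+1)\beta(\varepsilon_i-\mu)} - 1},

    which reduces to the Fermi-Dirac form 1/(e^{\beta(\varepsilon_i-\mu)} + 1) at q = 1 and to the Bose-Einstein form 1/(e^{\beta(\varepsilon_i-\mu)} - 1) as q → ∞.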

  5. Statistical Knowledge and the Over-Interpretation of Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    Boysen, Guy A.

    2017-01-01

    Research shows that teachers interpret small differences in student evaluations of teaching as meaningful even when available statistical information indicates that the differences are not reliable. The current research explored the effect of statistical training on college teachers' tendency to over-interpret student evaluation differences. A…

  6. Interpretingstatistical hypothesis testing” results in clinical research

    PubMed Central

    Sarmukaddam, Sanjeev B.

    2012-01-01

    The difference between “clinical significance” and “statistical significance” should be kept in mind while interpreting “statistical hypothesis testing” results in clinical research. This fact is already known to many but is pointed out again here, as the philosophy of “statistical hypothesis testing” is sometimes unnecessarily criticized, mainly due to failure to consider this distinction. Randomized controlled trials are wrongly criticized in a similar way. That a scientific method may not be applicable in some peculiar/particular situation does not mean that the method is useless. Also remember that “statistical hypothesis testing” is not for decision making, and the field of “decision analysis” is very much an integral part of the science of statistics. It is not correct to say that “confidence intervals have nothing to do with confidence” unless one understands the meaning of the word “confidence” as used in the context of a confidence interval. Interpretation of the results of every study should always consider all possible alternative explanations, such as chance, bias, and confounding. Statistical tests in inferential statistics are, in general, designed to answer the question “How likely is it that the difference found in the random sample(s) is due to chance?”, and therefore the limitation of relying only on statistical significance in making clinical decisions should be kept in mind. PMID:22707861

  7. Equivalent statistics and data interpretation.

    PubMed

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
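
    A minimal sketch of the equivalence claim for the two-sample t test: with known sample sizes, the p value and Cohen's d are deterministic transformations of the same t statistic (the Bayes-factor conversion is omitted here):

      import numpy as np
      from scipy import stats

      def equivalent_summaries(t, n1, n2):
          """Recover p and Cohen's d from (t, n1, n2) alone."""
          df = n1 + n2 - 2
          p = 2 * stats.t.sf(abs(t), df)    # two-sided p value
          d = t * np.sqrt(1 / n1 + 1 / n2)  # Cohen's d implied by t
          return p, d

      p, d = equivalent_summaries(t=2.5, n1=30, n2=30)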

  8. For a statistical interpretation of Helmholtz' thermal displacement

    NASA Astrophysics Data System (ADS)

    Podio-Guidugli, Paolo

    2016-11-01

    On moving from the classic papers by Einstein and Langevin on Brownian motion, two consistent statistical interpretations are given for the thermal displacement, a scalar field formally introduced by Helmholtz, whose time derivative is by definition the absolute temperature.

  9. Interpreting statistics of small lunar craters

    NASA Technical Reports Server (NTRS)

    Schultz, P. H.; Gault, D.; Greeley, R.

    1977-01-01

    Some of the wide variations in the crater-size distributions in lunar photography and in the resulting statistics were interpreted as different degradation rates on different surfaces, different scaling laws in different targets, and a possible population of endogenic craters. These possibilities are reexamined for statistics of 26 different regions. In contrast to most other studies, crater diameters as small as 5 m were measured from enlarged Lunar Orbiter framelets. According to the results of the reported analysis, the different crater distribution types appear to be most consistent with the hypotheses of differential degradation and a superposed crater population. Differential degradation can account for the low level of equilibrium in incompetent materials such as ejecta deposits, mantle deposits, and deep regoliths where scaling law changes and catastrophic processes introduce contradictions with other observations.

  10. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  11. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  12. Interpretation of Statistical Data: The Importance of Affective Expressions

    ERIC Educational Resources Information Center

    Queiroz, Tamires; Monteiro, Carlos; Carvalho, Liliane; François, Karen

    2017-01-01

    In recent years, research on the teaching and learning of statistics has emphasized that the interpretation of data is a complex process that involves cognitive and technical aspects. However, it is a human activity that also involves contextual and affective aspects. This view is in line with research on affectivity and cognition. While the affective…

  13. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  14. Workplace statistical literacy for teachers: interpreting box plots

    NASA Astrophysics Data System (ADS)

    Pierce, Robyn; Chick, Helen

    2013-06-01

    As a consequence of the increased use of data in workplace environments, there is a need to understand the demands that are placed on users to make sense of such data. In education, teachers are increasingly expected to interpret and apply complex data about student and school performance, and yet it is not clear that they always have the appropriate knowledge and experience to interpret the graphs, tables and other data that they receive. This study examined the statistical literacy demands placed on teachers, with a particular focus on box plot representations. Although box plots summarise the data in a way that makes visual comparisons possible across sets of data, this study showed that teachers do not always have the necessary fluency with the representation to describe correctly how the data are distributed in the representation. In particular, a significant number perceived the size of the regions of the box plot to be depicting frequencies rather than density, and there were misconceptions associated with outlying data that were not displayed on the plot. As well, teachers' perceptions of box plots were found to relate to three themes: attitudes, perceived value and misconceptions.
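
    The frequency-versus-density misconception described above can be demonstrated numerically: each of the four box plot regions holds about 25% of the observations, so a wider region signals lower density, not more data. A sketch with simulated skewed data:

      import numpy as np

      data = np.random.default_rng(1).lognormal(size=1000)  # skewed sample
      q1, q2, q3 = np.percentile(data, [25, 50, 75])
      for lo, hi in [(data.min(), q1), (q1, q2), (q2, q3), (q3, data.max())]:
          share = np.mean((data >= lo) & (data <= hi))
          print(f"region [{lo:6.2f}, {hi:6.2f}]  width {hi - lo:6.2f}  share {share:.2f}")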

  15. Material Phase Causality or a Dynamics-Statistical Interpretation of Quantum Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koprinkov, I. G.

    2010-11-25

    The internal phase dynamics of a quantum system interacting with an electromagnetic field is revealed in detail. Theoretical and experimental evidence of a causal relation of the phase of the wave function to the dynamics of the quantum system is presented systematically for the first time. A dynamics-statistical interpretation of quantum mechanics is introduced.

  16. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    PubMed

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being…
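
    A short sketch of the Cohen κ computation summarized above (observed agreement corrected for chance agreement); the reader scores are invented for illustration:

      import numpy as np

      def cohens_kappa(r1, r2):
          r1, r2 = np.asarray(r1), np.asarray(r2)
          po = np.mean(r1 == r2)                        # observed agreement
          pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
                   for c in np.union1d(r1, r2))
          return (po - pe) / (1 - pe)

      # two readers scoring ten scans as positive (1) or negative (0)
      kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
                           [1, 0, 0, 1, 0, 1, 1, 1, 0, 1])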

  17. Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization

    NASA Astrophysics Data System (ADS)

    Eroglu, Sertac

    2014-10-01

    The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, was termed the statistical mechanical Menzerath-Altmann model. The derived model allows interpreting the model parameters in terms of physical concepts. We also propose that many organizations exhibiting Menzerath-Altmann law behavior, whether linguistic or not, can be methodically examined by the transformed distribution model through the properly defined structure-dependent parameter and the energy-associated states.
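
    For reference, the classical Menzerath-Altmann model that the derivation generalizes is conventionally written (standard notation, not necessarily the paper's) as

      y(x) = a \, x^{b} e^{-c x},

    where y is the mean size of the constituents of a construct of size x, and a, b, c are fitted parameters.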

  18. Statistical transformation and the interpretation of inpatient glucose control data.

    PubMed

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B

    2014-03-01

    To introduce a statistical method of assessing hospital-based non-intensive care unit (non-ICU) inpatient glucose control. Point-of-care blood glucose (POC-BG) data from hospital non-ICUs were extracted for January 1 through December 31, 2011. Glucose data distribution was examined before and after Box-Cox transformations and compared to normality. Different subsets of data were used to establish upper and lower control limits, and exponentially weighted moving average (EWMA) control charts were constructed from June, July, and October data as examples to determine if out-of-control events were identified differently in nontransformed versus transformed data. A total of 36,381 POC-BG values were analyzed. In all 3 monthly test samples, glucose distributions in nontransformed data were skewed but approached a normal distribution once transformed. Interpretation of out-of-control events from EWMA control chart analyses also revealed differences. In the June test data, an out-of-control process was identified at sample 53 with nontransformed data, whereas the transformed data remained in control for the duration of the observed period. Analysis of July data demonstrated an out-of-control process sooner in the transformed (sample 55) than nontransformed (sample 111) data, whereas for October, transformed data remained in control longer than nontransformed data. Statistical transformations increase the normal behavior of inpatient non-ICU glycemic data sets. The decision to transform glucose data could influence the interpretation and conclusions about the status of inpatient glycemic control. Further study is required to determine whether transformed versus nontransformed data influence clinical decisions or evaluation of interventions.
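
    A hedged sketch of the pipeline described above: Box-Cox transform the glucose readings, then monitor the transformed series with an EWMA chart. The weight and limit constants here are common textbook choices, not necessarily the paper's settings:

      import numpy as np
      from scipy import stats

      glucose = np.random.default_rng(2).lognormal(mean=5, sigma=0.3, size=500)
      transformed, lam = stats.boxcox(glucose)  # lambda estimated by MLE

      def ewma_chart(x, weight=0.2, L=3.0):
          mu, sigma = x.mean(), x.std(ddof=1)
          z = np.empty_like(x)
          z[0] = mu
          for i in range(1, len(x)):
              z[i] = weight * x[i] + (1 - weight) * z[i - 1]
          half_width = L * sigma * np.sqrt(weight / (2 - weight))  # asymptotic limits
          return z, mu - half_width, mu + half_width

      z, lcl, ucl = ewma_chart(transformed)
      out_of_control = np.flatnonzero((z < lcl) | (z > ucl))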

  19. New physicochemical interpretations for the adsorption of food dyes on chitosan films using statistical physics treatment.

    PubMed

    Dotto, G L; Pinto, L A A; Hachicha, M A; Knani, S

    2015-03-15

    In this work, statistical physics treatment was employed to study the adsorption of food dyes onto chitosan films, in order to obtain new physicochemical interpretations at the molecular level. Experimental equilibrium curves were obtained for the adsorption of four dyes (FD&C red 2, FD&C yellow 5, FD&C blue 2, Acid Red 51) at different temperatures (298, 313 and 328 K). A statistical physics formula was used to interpret these curves, and parameters such as the number of adsorbed dye molecules per site (n), anchorage number (n'), receptor site density (NM), adsorbed quantity at saturation (Nasat), steric hindrance (τ), concentration at half saturation (c1/2) and molar adsorption energy (ΔEa) were estimated. The relation of the above-mentioned parameters to the chemical structure of the dyes and to temperature was evaluated and interpreted. Copyright © 2014 Elsevier Ltd. All rights reserved.
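
    As a hedged illustration of the kind of statistical physics formula referred to above, a common monolayer model with n molecules per site in this literature reads (the paper's exact expression may differ):

      N_a(c) = \frac{n \, N_M}{1 + \left( c_{1/2} / c \right)^{n}},

    with the adsorbed quantity at saturation given by Nasat = n NM.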

  20. Evaluation of forensic DNA mixture evidence: protocol for evaluation, interpretation, and statistical calculations using the combined probability of inclusion.

    PubMed

    Bieber, Frederick R; Buckleton, John S; Budowle, Bruce; Butler, John M; Coble, Michael D

    2016-08-31

    The evaluation and interpretation of forensic DNA mixture evidence faces greater interpretational challenges due to increasingly complex mixture evidence. Such challenges include: casework involving low quantity or degraded evidence leading to allele and locus dropout; allele sharing of contributors leading to allele stacking; and differentiation of PCR stutter artifacts from true alleles. There is variation in statistical approaches used to evaluate the strength of the evidence when inclusion of a specific known individual(s) is determined, and the approaches used must be supportable. There are concerns that methods utilized for interpretation of complex forensic DNA mixtures may not be implemented properly in some casework. Similar questions are being raised in a number of U.S. jurisdictions, leading to some confusion about mixture interpretation for current and previous casework. Key elements necessary for the interpretation and statistical evaluation of forensic DNA mixtures are described. Given that the most common method for statistical evaluation of DNA mixtures in many parts of the world, including the USA, is the Combined Probability of Inclusion/Exclusion (CPI/CPE), exposition and elucidation of this method and a protocol for its use are the focus of this article. Formulae and other supporting materials are provided. Guidance and details of a DNA mixture interpretation protocol are provided for application of the CPI/CPE method in the analysis of more complex forensic DNA mixtures. This description, in turn, should help reduce the variability of interpretation with application of this methodology and thereby improve the quality of DNA mixture interpretation throughout the forensic community.
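
    A sketch of the basic CPI arithmetic named above: at each locus, the probability of inclusion is the squared sum of the frequencies of the alleles observed in the mixture, and the CPI is the product across loci. Real casework protocols add safeguards (dropout, stutter, minimum allele frequencies) that this toy calculation omits:

      from math import prod

      def combined_probability_of_inclusion(loci):
          """loci: one list of observed allele frequencies per locus."""
          return prod(sum(freqs) ** 2 for freqs in loci)

      # two illustrative loci with two and three observed alleles
      cpi = combined_probability_of_inclusion([[0.10, 0.22], [0.08, 0.31, 0.05]])
      cpe = 1 - cpi  # combined probability of exclusion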

  1. Statistical Literacy: High School Students in Reading, Interpreting and Presenting Data

    NASA Astrophysics Data System (ADS)

    Hafiyusholeh, M.; Budayasa, K.; Siswono, T. Y. E.

    2018-01-01

    One of the foundations for high school students in statistics is to be able to read data and to present data in the form of tables and diagrams together with their interpretation. The purpose of this study is to describe high school students’ competencies in reading, interpreting and presenting data. Subjects consisted of male and female students who had high levels of mathematical ability. Data were collected in the form of task formulations, which were analyzed by reducing, presenting and verifying data. Results showed that the students read the data based on explicit explanations on the diagram, such as explaining the points in the diagram as the relation between the x and y axes and determining the simple trend of a graph, including the maximum and minimum points. In interpreting and summarizing the data, both subjects paid attention to general data trends and used them to predict increases or decreases in data. The male student estimated the (n+1)th value of the weight data by using the mode of the data, while the female student estimated the weight by using the average. The male student tended not to consider the characteristics of the data, while the female student considered them more carefully.

  2. Experimental statistics for biological sciences.

    PubMed

    Bang, Heejung; Davidian, Marie

    2010-01-01

    In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression," which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in statistical sciences. We hope that from this chapter readers will understand the key statistical concepts and terminologies, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inferences), and how to interpret the results. This text would be most useful as supplemental material while readers take their own statistics courses, or as a reference text to accompany a manual for any statistical software as a self-teaching guide.

  3. Statistical Approaches to Interpretation of Local, Regional, and National Highway-Runoff and Urban-Stormwater Data

    USGS Publications Warehouse

    Tasker, Gary D.; Granato, Gregory E.

    2000-01-01

    Decision makers need viable methods for the interpretation of local, regional, and national highway-runoff and urban-stormwater data, including flows, concentrations and loads of chemical constituents and sediment, potential effects on receiving waters, and the potential effectiveness of various best management practices (BMPs). Valid (useful for intended purposes), current, and technically defensible stormwater-runoff models are needed to interpret data collected in field studies, to support existing highway and urban-runoff planning processes, to meet National Pollutant Discharge Elimination System (NPDES) requirements, and to provide methods for computation of Total Maximum Daily Loads (TMDLs) systematically and economically. Historically, conceptual, simulation, empirical, and statistical models of varying levels of detail, complexity, and uncertainty have been used to meet various data-quality objectives in the decision-making processes necessary for the planning, design, construction, and maintenance of highways and for other land-use applications. Water-quality simulation models attempt a detailed representation of the physical processes and mechanisms at a given site. Empirical and statistical regional water-quality assessment models provide a more general picture of water quality or changes in water quality over a region. All these modeling techniques share one common aspect: their predictive ability is poor without suitable site-specific data for calibration. To properly apply the correct model, one must understand the classification of variables, the unique characteristics of water-resources data, and the concept of population structure and analysis. Classifying the variables being used to analyze data may determine which statistical methods are appropriate for data analysis. An understanding of the characteristics of water-resources data is necessary to evaluate the applicability of different statistical methods, to interpret the results of these techniques…

  4. Statistical significance versus clinical relevance.

    PubMed

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

    In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
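
    A minimal sketch of the distinction drawn above: the p value addresses the data under the null hypothesis, while the confidence interval conveys the magnitude and imprecision of the estimated effect (illustrative numbers):

      import numpy as np
      from scipy import stats

      a = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3])
      b = np.array([4.6, 4.9, 4.4, 4.7, 4.5, 4.8])
      t, p = stats.ttest_ind(a, b)  # two-sided p value
      diff = a.mean() - b.mean()
      df = len(a) + len(b) - 2
      sp = np.sqrt(((len(a) - 1) * a.var(ddof=1)
                    + (len(b) - 1) * b.var(ddof=1)) / df)  # pooled SD
      se = sp * np.sqrt(1 / len(a) + 1 / len(b))
      ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se  # 95% CI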

  5. Report: New analytical and statistical approaches for interpreting the relationships among environmental stressors and biomarkers

    EPA Science Inventory

    The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical, and statistical perspectives (Pleil et al. 2014; Sobus et al. 2011...

  6. [Age index and an interpretation of survivorship curves (author's transl)].

    PubMed

    Lohmann, W

    1977-01-01

    Clinical investigations showed that the age dependences of physiological functions do not show, as generally assumed, a linear increase with age, but an exponential one. Considering this result, one can easily interpret the survivorship curve of a population (Gompertz plot). The only thing that is required is that the probability of death (death rate) is proportional to a function of ageing given by μ(t) = μ0 exp(αt). If survivorship curves resulting from annual death statistics are fitted with suitable parameters, the resulting α-values are in agreement with clinical data.
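
    The hazard quoted above integrates to a closed-form survivorship curve (the Gompertz result), written out in LaTeX:

      \mu(t) = \mu_0 e^{\alpha t}
      \quad\Longrightarrow\quad
      S(t) = \exp\!\left(-\int_0^t \mu(u)\,du\right)
           = \exp\!\left[\frac{\mu_0}{\alpha}\left(1 - e^{\alpha t}\right)\right].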

  7. Data Analysis and Statistical Methods for the Assessment and Interpretation of Geochronologic Data

    NASA Astrophysics Data System (ADS)

    Reno, B. L.; Brown, M.; Piccoli, P. M.

    2007-12-01

    Ages are traditionally reported as a weighted mean with an uncertainty based on least squares analysis of analytical error on individual dates. This method does not take into account geological uncertainties, and cannot accommodate asymmetries in the data. In most instances, this method will understate uncertainty on a given age, which may lead to over-interpretation of age data. Geologic uncertainty is difficult to quantify, but is typically greater than analytical uncertainty. These factors make traditional statistical approaches inadequate to fully evaluate geochronologic data. We propose a protocol to assess populations within multi-event datasets and to calculate age and uncertainty from each population of dates interpreted to represent a single geologic event using robust and resistant statistical methods. To assess whether populations thought to represent different events are statistically separate, exploratory data analysis is undertaken using a box plot, where the range of the data is represented by a 'box' of length given by the interquartile range, divided at the median of the data, with 'whiskers' that extend to the furthest datapoint that lies within 1.5 times the interquartile range beyond the box. If the boxes representing the populations do not overlap, they are interpreted to represent statistically different sets of dates. Ages are calculated from statistically distinct populations using a robust tool such as the tanh method of Kelsey et al. (2003, CMP, 146, 326-340), which is insensitive to any assumptions about the underlying probability distribution from which the data are drawn. Therefore, this method takes into account the full range of data, and is not drastically affected by outliers. The interquartile range of each population of dates gives a first pass at expressing uncertainty, which accommodates asymmetry in the dataset; outliers have a minor effect on the uncertainty. To better quantify the uncertainty, a…
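
    A sketch of the exploratory step described above: compute box plot statistics (quartiles plus 1.5 × IQR whiskers) for two populations of dates and check whether the boxes overlap; the populations are invented:

      import numpy as np

      def box_stats(dates):
          q1, q2, q3 = np.percentile(dates, [25, 50, 75])
          iqr = q3 - q1
          lo = dates[dates >= q1 - 1.5 * iqr].min()  # whisker: furthest point
          hi = dates[dates <= q3 + 1.5 * iqr].max()  # within 1.5 IQR of the box
          return lo, q1, q2, q3, hi

      pop_a = np.array([512.0, 515.0, 518.0, 520.0, 522.0, 525.0])  # ages, Ma
      pop_b = np.array([480.0, 483.0, 486.0, 488.0, 490.0, 495.0])
      a, b = box_stats(pop_a), box_stats(pop_b)
      boxes_separate = a[1] > b[3] or b[1] > a[3]  # no interquartile overlap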

  8. Intuitive and interpretable visual communication of a complex statistical model of disease progression and risk.

    PubMed

    Jieyi Li; Arandjelovic, Ognjen

    2017-07-01

    Computer science, and machine learning in particular, is increasingly lauded for its potential to aid medical practice. However, the highly technical nature of state-of-the-art techniques can be a major obstacle to their usability by health care professionals and thus to their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method which leverages data in the form of large-scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore interactively different diagnostic trajectories, or display cumulative long-term prognostics, in an intuitive and easily interpretable manner.

  9. Analysis of statistical misconception in terms of statistical reasoning

    NASA Astrophysics Data System (ADS)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done through various levels of education. However, this skill is often low because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have negative attitudes toward courses related to research. The purpose of this research is to analyze students’ misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students’ misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If a minimum value of 65 is taken as the standard achievement of course competence, the students’ mean values are below that standard. The results of the misconception study emphasized which subtopics should be considered. Based on the assessment results, it was found that students’ misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) sample, and 6) association.

  10. Localized Smart-Interpretation

    NASA Astrophysics Data System (ADS)

    Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom

    2014-05-01

    The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model the geologist needs to rely on the geophysical data. The problem, however, is that the amount of geophysical data is in many cases so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that much of the information available from the geophysical surveys goes unexploited, which is a problem, because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that 1. allows all geophysical data to be considered when building the geological model, 2. is fast, and 3. allows quantification of the geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologist interprets, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc. The parameter d reflects an actual interpretation, such as the depth to the base of a ground water reservoir. First we infer a statistical model f(d,m) by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds with the interpretation, the number of interpreted datapoints from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f…
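
    A hedged sketch of the idea of inferring f(d,m): treat expert picks d as a regression target given quantified information m, so that new picks come with uncertainty; the Gaussian process choice and all numbers below are assumptions for illustration:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      m = np.array([[0.8], [1.1], [1.9], [2.4], [3.0]])  # quantified info, e.g. an inverted attribute
      d = np.array([12.0, 14.5, 21.0, 25.5, 30.0])       # expert-picked depths (m)
      f = GaussianProcessRegressor().fit(m, d)           # statistical model f(d,m)
      pred, sd = f.predict(np.array([[1.5], [2.8]]), return_std=True)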

  11. Teaching the Assessment of Normality Using Large Easily-Generated Real Data Sets

    ERIC Educational Resources Information Center

    Kulp, Christopher W.; Sprechini, Gene D.

    2016-01-01

    A classroom activity is presented, which can be used in teaching students statistics with an easily generated, large, real world data set. The activity consists of analyzing a video recording of an object. The colour data of the recorded object can then be used as a data set to explore variation in the data using graphs including histograms,…

  12. A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.

    PubMed

    Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer

    2016-09-10

    When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  13. The Malpractice of Statistical Interpretation

    ERIC Educational Resources Information Center

    Fraas, John W.; Newman, Isadore

    1978-01-01

    Problems associated with the use of gain scores, analysis of covariance, multicollinearity, part and partial correlation, and the lack of rectilinearity in regression are discussed. Particular attention is paid to the misuse of statistical techniques. (JKS)

  14. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
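
    A minimal sketch of the classical side of the comparison, the Kaplan-Meier estimator: S(t) is the running product of (1 - d_i/n_i) over event times (illustrative data, events processed sequentially):

      import numpy as np

      def kaplan_meier(times, events):
          """times: follow-up times; events: 1 = event, 0 = censored."""
          order = np.argsort(times)
          times, events = np.asarray(times)[order], np.asarray(events)[order]
          at_risk, surv, curve = len(times), 1.0, []
          for t, d in zip(times, events):
              if d:
                  surv *= 1 - 1 / at_risk  # one event among those at risk
                  curve.append((t, surv))
              at_risk -= 1                  # leaves the risk set either way
          return curve

      curve = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])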

  15. Statistical transformation and the interpretation of inpatient glucose control data from the intensive care unit.

    PubMed

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B

    2014-05-01

    Glucose control can be problematic in critically ill patients. We evaluated the impact of statistical transformation on interpretation of intensive care unit inpatient glucose control data. Point-of-care blood glucose (POC-BG) data derived from patients in the intensive care unit for 2011 was obtained. Box-Cox transformation of POC-BG measurements was performed, and distribution of data was determined before and after transformation. Different data subsets were used to establish statistical upper and lower control limits. Exponentially weighted moving average (EWMA) control charts constructed from April, October, and November data determined whether out-of-control events could be identified differently in transformed versus nontransformed data. A total of 8679 POC-BG values were analyzed. POC-BG distributions in nontransformed data were skewed but approached normality after transformation. EWMA control charts revealed differences in projected detection of out-of-control events. In April, an out-of-control process resulting in the lower control limit being exceeded was identified at sample 116 in nontransformed data but not in transformed data. October transformed data detected an out-of-control process exceeding the upper control limit at sample 27 that was not detected in nontransformed data. Nontransformed November results remained in control, but transformation identified an out-of-control event less than 10 samples into the observation period. Using statistical methods to assess population-based glucose control in the intensive care unit could alter conclusions about the effectiveness of care processes for managing hyperglycemia. Further study is required to determine whether transformed versus nontransformed data change clinical decisions about the interpretation of care or intervention results. © 2014 Diabetes Technology Society.

  16. Statistical Transformation and the Interpretation of Inpatient Glucose Control Data From the Intensive Care Unit

    PubMed Central

    Saulnier, George E.; Castro, Janna C.

    2014-01-01

    Glucose control can be problematic in critically ill patients. We evaluated the impact of statistical transformation on interpretation of intensive care unit inpatient glucose control data. Point-of-care blood glucose (POC-BG) data derived from patients in the intensive care unit for 2011 was obtained. Box–Cox transformation of POC-BG measurements was performed, and distribution of data was determined before and after transformation. Different data subsets were used to establish statistical upper and lower control limits. Exponentially weighted moving average (EWMA) control charts constructed from April, October, and November data determined whether out-of-control events could be identified differently in transformed versus nontransformed data. A total of 8679 POC-BG values were analyzed. POC-BG distributions in nontransformed data were skewed but approached normality after transformation. EWMA control charts revealed differences in projected detection of out-of-control events. In April, an out-of-control process resulting in the lower control limit being exceeded was identified at sample 116 in nontransformed data but not in transformed data. October transformed data detected an out-of-control process exceeding the upper control limit at sample 27 that was not detected in nontransformed data. Nontransformed November results remained in control, but transformation identified an out-of-control event less than 10 samples into the observation period. Using statistical methods to assess population-based glucose control in the intensive care unit could alter conclusions about the effectiveness of care processes for managing hyperglycemia. Further study is required to determine whether transformed versus nontransformed data change clinical decisions about the interpretation of care or intervention results. PMID:24876620

  17. Application of machine learning and expert systems to Statistical Process Control (SPC) chart interpretation

    NASA Technical Reports Server (NTRS)

    Shewhart, Mark

    1991-01-01

    Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
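
    A minimal sketch of the simplest chart rule such a system would encode, flagging points beyond three standard deviations of the center line (simulated in-control data for illustration):

      import numpy as np

      x = np.random.default_rng(5).normal(10.0, 1.0, 100)    # process measurements
      mu, sigma = x.mean(), x.std(ddof=1)
      abnormal = np.flatnonzero(np.abs(x - mu) > 3 * sigma)  # 3-sigma signals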

  18. SLAR image interpretation keys for geographic analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.

    1972-01-01

    A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.

  19. Evaluating Structural Equation Models for Categorical Outcomes: A New Test Statistic and a Practical Challenge of Interpretation.

    PubMed

    Monroe, Scott; Cai, Li

    2015-01-01

    This research is concerned with two topics in assessing model fit for categorical data analysis. The first topic involves the application of a limited-information overall test, introduced in the item response theory literature, to structural equation modeling (SEM) of categorical outcome variables. Most popular SEM test statistics assess how well the model reproduces estimated polychoric correlations. In contrast, limited-information test statistics assess how well the underlying categorical data are reproduced. Here, the recently introduced C2 statistic of Cai and Monroe (2014) is applied. The second topic concerns how the root mean square error of approximation (RMSEA) fit index can be affected by the number of categories in the outcome variable. This relationship creates challenges for interpreting RMSEA. While the two topics initially appear unrelated, they may conveniently be studied in tandem since RMSEA is based on an overall test statistic, such as C2. The results are illustrated with an empirical application to data from a large-scale educational survey.
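
    In one common form (a standard definition, stated here for orientation), the RMSEA discussed above is computed from an overall statistic X² such as C2, with df degrees of freedom and sample size N:

      \mathrm{RMSEA} = \sqrt{\frac{\max\left(X^2 - df,\; 0\right)}{df\,(N-1)}}.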

  20. Is the statistic value all we should care about in neuroimaging?

    PubMed

    Chen, Gang; Taylor, Paul A; Cox, Robert W

    2017-02-15

    Here we address an important issue that has been embedded within the neuroimaging community for a long time: the absence of effect estimates in results reporting in the literature. The statistic value itself, as a dimensionless measure, does not provide information on the biophysical interpretation of a study, and it certainly does not represent the whole picture of a study. Unfortunately, in contrast to standard practice in most scientific fields, effect (or amplitude) estimates are usually not provided in most results reporting in current neuroimaging publications and presentations. Possible reasons underlying this general trend include (1) lack of general awareness, (2) software limitations, (3) inaccurate estimation of the BOLD response, and (4) poor modeling due to our relatively limited understanding of FMRI signal components. However, as we discuss here, such reporting damages the reliability and interpretability of the scientific findings themselves, and there is in fact no overwhelming reason for such a practice to persist. In order to promote meaningful interpretation, cross validation, reproducibility, and meta- and power analyses in neuroimaging, we strongly suggest that, as part of good scientific practice, effect estimates should be reported together with their corresponding statistic values. We provide several easily adaptable recommendations for facilitating this process. Published by Elsevier Inc.

  1. STATISTICAL ANALYSIS OF SPECTROPHOTOMETRIC DETERMINATIONS OF BORON; Estudo Estatistico de Determinacoes Espectrofotometricas de Boro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, F.W.; Pagano, C.; Schneiderman, B.

    1959-07-01

    Boron can be determined quantitatively by absorption spectrophotometry of solutions of the red compound formed by the reaction of boric acid with curcumin. This reaction is affected by various factors, some of which can be detected easily in the data interpretation. Others, however, present more difficulty. The application of modern statistical methods to the study of the influence of these factors on the quantitative determination of boron is presented. These methods provide objective ways of establishing significant effects of the factors involved. (auth)

  2. Statistical methods of fracture characterization using acoustic borehole televiewer log interpretation

    NASA Astrophysics Data System (ADS)

    Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.

    2017-08-01

    Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
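
    A hedged sketch of the truncated maximum likelihood step described above, shown for a single candidate (an exponential spacing model) with the likelihood renormalized to the truncation window and ranked by AIC; all numbers are illustrative:

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import expon

      def truncated_exp_fit(spacings, lower, upper):
          x = np.asarray(spacings)
          def nll(scale):
              # renormalize the density to the observable window [lower, upper]
              z = expon.cdf(upper, scale=scale) - expon.cdf(lower, scale=scale)
              return -np.sum(expon.logpdf(x, scale=scale) - np.log(z))
          res = minimize_scalar(nll, bounds=(1e-6, 1e3), method="bounded")
          aic = 2 * 1 + 2 * res.fun  # one fitted parameter
          return res.x, aic

      scale, aic = truncated_exp_fit([0.4, 0.7, 1.1, 1.6, 2.3, 3.0], 0.3, 5.0)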

  3. Attitudes to and implementation of video interpretation in a Danish hospital: A cross-sectional study.

    PubMed

    Mottelson, Ida Nygaard; Sodemann, Morten; Nielsen, Dorthe Susanne

    2018-03-01

    Immigrants, refugees, and their descendants comprise 12% of Denmark's population. Some of these people do not speak or understand Danish well enough to communicate with the staff in a healthcare setting and therefore need interpreter services. Interpretation through video conferencing equipment (video interpretation) is frequently used and creates a forum where the interpreter is not physically present in the medical consultation. The aim of this study was to investigate the attitudes to and experiences with video interpretation among charge nurses in a Danish university hospital. An electronic questionnaire was sent to 99 charge nurses. The questionnaire comprised both closed and open-ended questions. The answers were analysed using descriptive statistics and thematic text condensation. Of the 99 charge nurses, 78 (79%) completed the questionnaire. Most charge nurses, 21 (91%) of the daily/monthly users, and 21 (72%) of the monthly/yearly users, said that video interpretation increased the quality of their conversations with patients. A total of 19 (24%) departments had not used video interpretation within the last 12 months. The more the charge nurses used video interpretation, the more satisfied they were. Most of the charge nurses using video interpretation expressed satisfaction with the technology and found it easy to use. Some charge nurses are still content to allow family or friends to interpret. To reach its full potential, video interpretation technology has to be reliable and easily accessible for any consultation, including at the bedside.

  4. Uses and Misuses of Student Evaluations of Teaching: The Interpretation of Differences in Teaching Evaluation Means Irrespective of Statistical Information

    ERIC Educational Resources Information Center

    Boysen, Guy A.

    2015-01-01

    Student evaluations of teaching are among the most accepted and important indicators of college teachers' performance. However, faculty and administrators can overinterpret small variations in mean teaching evaluations. The current research examined the effect of including statistical information on the interpretation of teaching evaluations.…

  5. Interpretation of commonly used statistical regression models.

    PubMed

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
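
    A brief sketch of the most common coefficient interpretation reviewed above: in logistic regression, exp(β) is the odds ratio per one-unit increase in the predictor (simulated data; the statsmodels fit is an assumed tool choice, not the paper's):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      x = rng.normal(size=200)  # e.g. a scaled exposure
      y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x - 0.2))))
      fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
      odds_ratio = np.exp(fit.params[1])  # OR per one-unit increase in x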

  6. Easily-wired toggle switch

    NASA Technical Reports Server (NTRS)

    Dean, W. T.; Stringer, E. J.

    1979-01-01

    Crimp-type connectors reduce assembly and disassembly time. With this design, no switch preparation is necessary; socket contacts are crimped to wires and inserted in a module attached to the back of the toggle switch, engaging pins inside the module to make electrical connections. Wires are easily removed with a standard detachment tool. The design can accommodate wires of any gage, and as many terminals can be placed on the switch as the wire gage and switch dimensions allow.

  7. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
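
    As an illustration of the exact conjugate machinery the abstract describes, here is a minimal sketch in Python, assuming a dichotomous outcome, flat Beta(1, 1) priors, and invented event counts (none of these numbers come from the reviewed trials):

        # A minimal sketch of an exact-conjugate Bayesian re-analysis of a
        # two-arm trial with a dichotomous outcome. The event counts and the
        # Beta(1, 1) priors are illustrative assumptions, not study data.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical trial: events / patients in each arm
        events_tx, n_tx = 30, 200      # intervention arm
        events_ct, n_ct = 45, 200      # control arm

        # Beta(1, 1) priors update to Beta(1 + events, 1 + non-events)
        p_tx = rng.beta(1 + events_tx, 1 + n_tx - events_tx, size=100_000)
        p_ct = rng.beta(1 + events_ct, 1 + n_ct - events_ct, size=100_000)
        rr = p_tx / p_ct               # posterior draws of the relative risk

        print("P(any benefit, RR < 1)     =", np.mean(rr < 1.0))
        print("P(large benefit, RR < 0.75) =", np.mean(rr < 0.75))

    The two printed probabilities correspond to the paper's distinction between "any benefit" and larger, clinically relevant effects.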

  8. A Critique of Divorce Statistics and Their Interpretation.

    ERIC Educational Resources Information Center

    Crosby, John F.

    1980-01-01

    Increasingly, appeals to the divorce statistic are employed to substantiate claims that the family is in a state of breakdown and marriage is passe. This article contains a consideration of reasons why the divorce statistics are invalid and/or unreliable as indicators of the present state of marriage and family. (Author)

  9. Biomembrane Permeabilization: Statistics of Individual Leakage Events Harmonize the Interpretation of Vesicle Leakage.

    PubMed

    Braun, Stefan; Pokorná, Šárka; Šachl, Radek; Hof, Martin; Heerklotz, Heiko; Hoernke, Maria

    2018-01-23

    The mode of action of membrane-active molecules, such as antimicrobial, anticancer, cell penetrating, and fusion peptides and their synthetic mimics, transfection agents, drug permeation enhancers, and biological signaling molecules (e.g., quorum sensing), involves either the general or local destabilization of the target membrane or the formation of defined, rather stable pores. Some effects aim at killing the cell, while others need to be limited in space and time to avoid serious damage. Biological tests reveal translocation of compounds and cell death but do not provide a detailed, mechanistic, and quantitative understanding of the modes of action and their molecular basis. Model membrane studies of membrane leakage have been used for decades to tackle this issue, but their interpretation in terms of biology has remained challenging and often quite limited. Here we compare two recent, powerful protocols to study model membrane leakage: the microscopic detection of dye influx into giant liposomes and time-correlated single photon counting experiments to characterize dye efflux from large unilamellar vesicles. A statistical treatment of both data sets not only harmonizes apparent discrepancies but also makes us aware of principal issues that have been confusing the interpretation of model membrane leakage data so far. Moreover, our study reveals a fundamental difference between nano- and microscale systems that needs to be taken into account when conclusions about microscale objects, such as cells, are drawn from nanoscale models.

  10. Common pitfalls in statistical analysis: Clinical versus statistical significance

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results, which are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects its impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754
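
    A minimal sketch of the distinction, with invented numbers: at a large enough sample size, a clinically negligible mean difference still yields a very small p-value (requires NumPy and SciPy):

        # Illustration (not from the article): statistical significance
        # without clinical significance. A 0.5 mmHg difference in blood
        # pressure is clinically trivial, yet with n = 50,000 per arm it
        # is highly statistically significant.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 50_000
        control = rng.normal(140.0, 15.0, n)
        treated = rng.normal(139.5, 15.0, n)

        t, p = stats.ttest_ind(treated, control)
        print(f"mean difference = {treated.mean() - control.mean():.2f} mmHg, p = {p:.2g}")
        # p is tiny, but no clinician would change practice for 0.5 mmHg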

  11. Assessing compositional variability through graphical analysis and Bayesian statistical approaches: case studies on transgenic crops.

    PubMed

    Harrigan, George G; Harrison, Jay M

    2012-01-01

    New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.

  12. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or
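
    For reference, the standard textbook definition of the weak value referred to above (standard background, not a quotation from the paper), for a preselected state |psi> and a postselected state |phi>:

        A_{w} \;=\; \frac{\langle \phi \vert A \vert \psi \rangle}{\langle \phi \vert \psi \rangle}

    A_w is in general complex, which is precisely what invites the geometric reading in terms of projections and inner products of observables described in the abstract.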

  13. Easily Installable Wireless Behavioral Monitoring System with Electric Field Sensor for Ordinary Houses

    PubMed Central

    Tsukamoto, S; Hoshino, H; Tamura, T

    2008-01-01

    This paper describes an indoor behavioral monitoring system for improving the quality of life in ordinary houses. It employs a device that uses weak radio waves for transmitting the obtained data and it is designed such that it can be installed by a user without requiring any technical knowledge or extra constructions. This study focuses on determining the usage statistics of home electric appliances by using an electromagnetic field sensor as a detection device. The usage of the home appliances is determined by measuring the electromagnetic field that can be observed in an area near the appliance. It is assumed that these usage statistics could provide information regarding the indoor behavior of a subject. Since the sensor is not direction sensitive and does not require precise positioning and wiring, it can be easily installed in ordinary houses by the end users. For evaluating the practicability of the sensor unit, several simple tests have been performed. The results indicate that the proposed system could be useful for collecting the usage statistics of home appliances. PMID:19415135

  14. An Easily Constructed and Versatile Molecular Model

    NASA Astrophysics Data System (ADS)

    Hernandez, Sandra A.; Rodriguez, Nora M.; Quinzani, Oscar

    1996-08-01

    Three-dimensional molecular models are powerful tools used in basic courses of general and organic chemistry when the students must visualize the spatial distributions of atoms in molecules and relate them to the physical and chemical properties of such molecules. This article discusses inexpensive, easily carried, and semipermanent molecular models that the students may build by themselves. These models are based upon two different types of arrays of thin flexible wires, like telephone hook-up wires, which may be bent easily but keep their shapes.

  15. Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Lith, Janneke

    The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.
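
    The ergodic-theorem equality invoked in the first point is, in its standard form (added here for reference, in textbook notation rather than the survey's own):

        \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} f\bigl(x(t)\bigr)\, dt
        \;=\; \int_{\Gamma} f \, d\mu

    for mu-almost every initial condition, where mu is the microcanonical measure on the energy surface Gamma; the left side is the infinite time average a long measurement is argued to approximate, the right side the microcanonical phase average.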

  16. Workplace Statistical Literacy for Teachers: Interpreting Box Plots

    ERIC Educational Resources Information Center

    Pierce, Robyn; Chick, Helen

    2013-01-01

    As a consequence of the increased use of data in workplace environments, there is a need to understand the demands that are placed on users to make sense of such data. In education, teachers are being increasingly expected to interpret and apply complex data about student and school performance, and, yet it is not clear that they always have the…

  17. Interpreting Association from Graphical Displays

    ERIC Educational Resources Information Center

    Fitzallen, Noleine

    2016-01-01

    Research that has explored students' interpretations of graphical representations has not extended to include how students apply understanding of particular statistical concepts related to one graphical representation to interpret different representations. This paper reports on the way in which students' understanding of covariation, evidenced…

  18. The Statistical Interpretation of Entropy: An Activity

    ERIC Educational Resources Information Center

    Timberlake, Todd

    2010-01-01

    The second law of thermodynamics, which states that the entropy of an isolated macroscopic system can increase but will not decrease, is a cornerstone of modern physics. Ludwig Boltzmann argued that the second law arises from the motion of the atoms that compose the system. Boltzmann's statistical mechanics provides deep insight into the…

  19. Compositionality and Statistics in Adjective Acquisition: 4-Year-Olds Interpret "Tall" and "Short" Based on the Size Distributions of Novel Noun Referents

    ERIC Educational Resources Information Center

    Barner, David; Snedeker, Jesse

    2008-01-01

    Four experiments investigated 4-year-olds' understanding of adjective-noun compositionality and their sensitivity to statistics when interpreting scalar adjectives. In Experiments 1 and 2, children selected "tall" and "short" items from 9 novel objects called "pimwits" (1-9 in. in height) or from this array plus 4 taller or shorter distractor…

  20. Improving interpretation of publically reported statistics on health and healthcare: the Figure Interpretation Assessment Tool (FIAT-Health).

    PubMed

    Gerrits, Reinie G; Kringos, Dionne S; van den Berg, Michael J; Klazinga, Niek S

    2018-03-07

    Policy-makers, managers, scientists, patients and the general public are confronted daily with figures on health and healthcare through public reporting in newspapers, webpages and press releases. However, information on the key characteristics of these figures necessary for their correct interpretation is often not adequately communicated, which can lead to misinterpretation and misinformed decision-making. The objective of this research was to map the key characteristics relevant to the interpretation of figures on health and healthcare, and to develop a Figure Interpretation Assessment Tool-Health (FIAT-Health) through which figures on health and healthcare can be systematically assessed, allowing for a better interpretation of these figures. The abovementioned key characteristics of figures on health and healthcare were identified through systematic expert consultations in the Netherlands on four topic categories of figures, namely morbidity, healthcare expenditure, healthcare outcomes and lifestyle. The identified characteristics were used as a frame for the development of the FIAT-Health. Development of the tool and its content was supported and validated through regular review by a sounding board of potential users. Identified characteristics relevant for the interpretation of figures in the four categories relate to the figures' origin, credibility, expression, subject matter, population and geographical focus, time period, and underlying data collection methods. The characteristics were translated into a set of 13 dichotomous and 4-point Likert scale questions constituting the FIAT-Health, and two final assessment statements. Users of the FIAT-Health were provided with a summary overview of their answers to support a final assessment of the correctness of a figure and the appropriateness of its reporting. FIAT-Health can support policy-makers, managers, scientists, patients and the general public to systematically assess the quality of publicly reported

  1. Statistics and Data Interpretation for Social Work

    ERIC Educational Resources Information Center

    Rosenthal, James A.

    2011-01-01

    Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes numerous examples, data sets, and issues that students will encounter in social work practice. The first section introduces basic concepts and terms to…

  2. Evaluation of a statistics-based Ames mutagenicity QSAR model and interpretation of the results obtained.

    PubMed

    Barber, Chris; Cayley, Alex; Hanser, Thierry; Harding, Alex; Heghes, Crina; Vessey, Jonathan D; Werner, Stephane; Weiner, Sandy K; Wichard, Joerg; Giddings, Amanda; Glowienke, Susanne; Parenty, Alexis; Brigo, Alessandro; Spirkl, Hans-Peter; Amberg, Alexander; Kemper, Ray; Greene, Nigel

    2016-04-01

    The relative wealth of bacterial mutagenicity data available in the public literature means that in silico quantitative/qualitative structure activity relationship (QSAR) systems can readily be built for this endpoint. A good means of evaluating the performance of such systems is to use private unpublished data sets, which generally represent a more distinct chemical space than publicly available test sets and, as a result, provide a greater challenge to the model. However, raw performance metrics should not be the only factor considered when judging this type of software since expert interpretation of the results obtained may allow for further improvements in predictivity. Enough information should be provided by a QSAR to allow the user to make general, scientifically-based arguments in order to assess and overrule predictions when necessary. With all this in mind, we sought to validate the performance of the statistics-based in vitro bacterial mutagenicity prediction system Sarah Nexus (version 1.1) against private test data sets supplied by nine different pharmaceutical companies. The results of these evaluations were then analysed in order to identify findings presented by the model which would be useful for the user to take into consideration when interpreting the results and making their final decision about the mutagenic potential of a given compound. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Issues affecting the interpretation of eastern hardwood resource statistics

    Treesearch

    William G. Luppold; William H. McWilliams

    2000-01-01

    Forest inventory statistics developed by the USDA Forest Service are used by customers ranging from forest industry to state and local economic development groups. In recent years, these statistics have been used increasingly to justify greater utilization of the eastern hardwood resource or to evaluate the sustainability of expanding demand for hardwood roundwood and...

  4. The use of easily debondable orthodontic adhesives with ceramic brackets.

    PubMed

    Ryu, Chiyako; Namura, Yasuhiro; Tsuruoka, Takashi; Hama, Tomohiko; Kaji, Kaori; Shimizu, Noriyoshi

    2011-01-01

    We experimentally produced an easily debondable orthodontic adhesive (EDA) containing heat-expandable microcapsules. The purpose of this in vitro study was to evaluate the best debonding condition when EDA is used for ceramic brackets. Shear bond strengths were measured before and after heating and were compared statistically. Temperatures of the bracket base and pulp wall were also examined during heating. Bond strengths of EDA containing 30 wt% and 40 wt% heat-expandable microcapsules were 13.4 and 12.9 MPa, respectively, and decreased significantly to 3.8 and 3.7 MPa, respectively, after heating. The temperature of the pulp wall increased by 1.8-3.6°C after heating, less than that required to induce pulp damage. Based on the results, we conclude that heating for 8 s during debonding of ceramic brackets bonded using EDA containing 40 wt% heat-expandable microcapsules is the most effective and safest method for the enamel and pulp.

  5. Quantitative Seismic Interpretation: Applying Rock Physics Tools to Reduce Interpretation Risk

    NASA Astrophysics Data System (ADS)

    Sondergeld, Carl H.

    This book is divided into seven chapters that cover rock physics, statistical rock physics, seismic inversion techniques, case studies, and work flows. On balance, the emphasis is on rock physics. Included are 56 color figures that greatly help in the interpretation of more complicated plots and displays. The domain of rock physics falls between petrophysics and seismics. It is the basis for interpreting seismic observations and is therefore pivotal to the understanding of this book. The first two chapters are dedicated to this topic (109 pages).

  6. Statistical Significance Testing from Three Perspectives and Interpreting Statistical Significance and Nonsignificance and the Role of Statistics in Research.

    ERIC Educational Resources Information Center

    Levin, Joel R.; And Others

    1993-01-01

    Journal editors respond to criticisms of reliance on statistical significance in research reporting. Joel R. Levin ("Journal of Educational Psychology") defends its use, whereas William D. Schafer ("Measurement and Evaluation in Counseling and Development") emphasizes the distinction between statistically significant and important. William Asher…

  7. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality.

    PubMed

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M

    2008-11-07

    Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of

  8. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality

    PubMed Central

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M

    2008-01-01

    Background: Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results: We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed
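
    For concreteness, a minimal sketch of the Poisson likelihood-ratio score that Kulldorff's scan statistic maximizes over candidate zones; this is the textbook form of the statistic, not code from the study, and the counts are hypothetical:

        # Poisson log-likelihood ratio evaluated for one candidate zone.
        # SaTScan computes this for every candidate circle, takes the
        # maximum, and assesses significance by Monte Carlo replication.
        import numpy as np

        def poisson_llr(c_in, e_in, c_total):
            """LLR for a zone with c_in observed and e_in expected cases,
            out of c_total cases overall (high-risk scan)."""
            if c_in <= e_in or c_in == 0:
                return 0.0
            c_out = c_total - c_in
            e_out = c_total - e_in
            return c_in * np.log(c_in / e_in) + c_out * np.log(c_out / e_out)

        # Hypothetical zone: 40 observed vs 25 expected of 500 total cases
        print(poisson_llr(40, 25, 500))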

  9. On Improving the Quality and Interpretation of Environmental Assessments using Statistical Analysis and Geographic Information Systems

    NASA Astrophysics Data System (ADS)

    Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.

    2014-12-01

    An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in the literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is lack of data that can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.

  10. Voltage controlled oscillator is easily aligned, has low phase noise

    NASA Technical Reports Server (NTRS)

    Sydnor, R. L.

    1965-01-01

    Voltage Controlled Oscillator /VCO/, represented by an equivalent RF circuit, is easily adjusted for optimum performance by varying the circuit parameters. The crystal drive level is likewise easily adjusted to obtain minimum phase noise.

  11. Interpreting the gamma statistic in phylogenetic diversification rate studies: a rate decrease does not necessarily indicate an early burst.

    PubMed

    Fordyce, James A

    2010-07-23

    Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification rate variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate, is the gamma statistic. Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using an exploratory data analysis tool for lineages-through-time plots, tree deviation, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decrease at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.
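
    A minimal sketch of the gamma statistic itself (the standard Pybus-Harvey formula, not the author's simulation code), computed from the internode intervals of an ultrametric tree:

        # Gamma statistic of Pybus & Harvey (2000), computed from the
        # internode intervals g[k]: the waiting time while the tree has k
        # lineages, for k = 2..n. Under a pure-birth model gamma ~ N(0, 1);
        # negative values are conventionally read as an early burst, and
        # the abstract above shows how sensitive this reading is to the
        # most recent branching times.
        import math

        def gamma_statistic(g):
            """g: internode intervals g_2..g_n (index 0 -> 2 lineages)."""
            n = len(g) + 1                       # number of tips
            # cumulative weighted times T_i = sum_{k=2}^{i} k * g_k
            T = [0.0]
            for k, gk in enumerate(g, start=2):
                T.append(T[-1] + k * gk)
            total = T[-1]                        # T_n
            mean_inner = sum(T[1:-1]) / (n - 2)  # mean of T_2 .. T_{n-1}
            return (mean_inner - total / 2) / (total * math.sqrt(1 / (12 * (n - 2))))

        # toy example: internode intervals shortening toward the present
        print(gamma_statistic([0.5, 0.4, 0.3, 0.2, 0.1]))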

  12. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
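
    A minimal sketch of one of the four misuses, alpha inflation: with k independent tests each at alpha = 0.05, the family-wise false-positive probability is 1 - (1 - alpha)^k (illustrative arithmetic, not the authors' example):

        # Family-wise error rate grows quickly with the number of tests.
        alpha = 0.05
        for k in (1, 5, 10, 20, 50):
            print(k, "tests -> P(>=1 false positive) =",
                  round(1 - (1 - alpha) ** k, 3))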

  13. Statistical Interpretation of the Local Field Inside Dielectrics.

    ERIC Educational Resources Information Center

    Barrera, Ruben G.; Mello, P. A.

    1982-01-01

    Compares several derivations of the Clausius-Mossotti relation to consistently analyze the nature of the approximations used and their range of applicability. Also presents a statistical-mechanical calculation of the local field for a classical system of harmonic oscillators interacting via the Coulomb potential. (Author/SK)
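
    For reference, the relation under discussion in its standard Gaussian-units form (textbook background, not quoted from the article): the Lorentz local field E_loc = E + (4*pi/3) P leads to

        \frac{\varepsilon - 1}{\varepsilon + 2} \;=\; \frac{4\pi}{3}\, N \alpha

    where N is the number density of polarizable units and alpha their polarizability; the derivations compared in the article differ in how they justify this local-field step.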

  14. Customizable tool for ecological data entry, assessment, monitoring, and interpretation

    USDA-ARS?s Scientific Manuscript database

    The Database for Inventory, Monitoring and Assessment (DIMA) is a highly customizable tool for data entry, assessment, monitoring, and interpretation. DIMA is a Microsoft Access database that can easily be used without Access knowledge and is available at no cost. Data can be entered for common, nat...

  15. Statistical Interpretation of Natural and Technological Hazards in China

    NASA Astrophysics Data System (ADS)

    Borthwick, Alistair, ,, Prof.; Ni, Jinren, ,, Prof.

    2010-05-01

    China is prone to catastrophic natural hazards from floods, droughts, earthquakes, storms, cyclones, landslides, epidemics, extreme temperatures, forest fires, avalanches, and even tsunami. This paper will list statistics related to the six worst natural disasters in China over the past 100 or so years, ranked according to number of fatalities. The corresponding data for the six worst natural disasters in China over the past decade will also be considered. [The data are abstracted from the International Disaster Database, Centre for Research on the Epidemiology of Disasters (CRED), Université Catholique de Louvain, Brussels, Belgium, http://www.cred.be/ where a disaster is defined as occurring if one of the following criteria is fulfilled: 10 or more people reported killed; 100 or more people reported affected; a call for international assistance; or declaration of a state of emergency.] The statistics include the number of occurrences of each type of natural disaster, the number of deaths, the number of people affected, and the cost in billions of US dollars. Over the past hundred years, the largest disasters may be related to the overabundance or scarcity of water, and to earthquake damage. However, there has been a substantial relative reduction in fatalities due to water related disasters over the past decade, even though the overall numbers of people affected remain huge, as does the economic damage. This change is largely due to the efforts put in by China's water authorities to establish effective early warning systems, the construction of engineering countermeasures for flood protection, the implementation of water pricing and other measures for reducing excessive consumption during times of drought. It should be noted that the dreadful death toll due to the Sichuan Earthquake dominates recent data. Joint research has been undertaken between the Department of Environmental Engineering at Peking University and the Department of Engineering Science at Oxford

  16. Easily degradable carbon - an indicator of microbial hotspots and soil degradation

    NASA Astrophysics Data System (ADS)

    Wolińska, Agnieszka; Banach, Artur; Szafranek-Nakonieczna, Anna; Stępniewska, Zofia; Błaszczyk, Mieczysław

    2018-01-01

    The effect of cultivation was quantified by comparing arable soil with non-cultivated soil in terms of easily degradable carbon and other selected microbiological factors, i.e. soil microbial biomass, respiration activity, and dehydrogenase activity. The intent was to ascertain whether easily degradable carbon can serve as a sensitive indicator of soil biological degradation and of microbial hot-spots. As a result, it was found that soil respiration activity was significantly higher (p <0.0001) in all controls, ranging between 30-60 vs. 11.5-23.7 μmol CO2 kg d.m.-1 h-1 for the arable soils. Dehydrogenase activity was significantly lower in the arable soil (down to 35-40% of the control values, p <0.001), varying depending on the soil type. The microbial biomass was also significantly higher in the non-cultivated soil (512-2807 vs. 416-1429 µg g-1 d.m., p <0.001), while easily degradable carbon ranged between 620-1209 mg kg-1 in non-cultivated soil and 497-877 mg kg-1 in arable soil (p <0.0001). It was demonstrated that agricultural practices affected soil properties by significantly reducing the levels of the studied parameters in relation to the control soils. The significant correlations of easily degradable carbon-respiration activity (ρ = 0.77*), easily degradable carbon-dehydrogenase activity (ρ = 0.42*), and easily degradable carbon-microbial biomass (ρ = 0.53*) reveal that easily degradable carbon is a novel, suitable factor indicative of soil biological degradation. It could therefore be used for evaluating the degree of soil degradation and for choosing a proper management procedure.

  17. Applied statistics in ecology: common pitfalls and simple solutions

    Treesearch

    E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick

    2013-01-01

    The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...

  18. Interpreting wireline measurements in coal beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, D.J.

    1991-06-01

    When logging coal seams with wireline tools, the interpretation method needed to evaluate the coals is different from that used for conventional oil and gas reservoirs. Wireline logs identify coals easily. For an evaluation, the contribution of each coal component to the raw measurements must be considered. This paper will discuss how each log measurement is affected by each component. The components of a coal will be identified as the mineral matter, macerals, moisture content, rank, gas content, and cleat porosity. The measurements illustrated are from the resistivity, litho-density, neutron, sonic, dielectric, and geochemical tools. Once the coal component effects have been determined, an interpretation of the logs can be made. This paper will illustrate how to use these corrected logs in a coal evaluation.

  19. Statistical mechanics of influence maximization with thermal noise

    NASA Astrophysics Data System (ADS)

    Lynn, Christopher W.; Lee, Daniel D.

    2017-03-01

    The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
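
    A minimal sketch of the kind of susceptibility-based projected gradient ascent described above, under a naive mean-field closure; the couplings, temperature, budget, and the crude renormalization used as a projection are all my own assumptions for illustration, not the authors' implementation:

        # Budgeted external-field allocation on a mean-field Ising system.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 20
        J = np.triu(rng.random((n, n)) < 0.2, 1).astype(float)
        J = J + J.T                       # symmetric 0/1 couplings, zero diagonal
        beta, budget, lr = 0.5, 3.0, 0.1

        def mean_field_m(h, iters=500):
            """Solve m = tanh(beta * (J m + h)) by damped fixed-point iteration."""
            m = np.zeros(n)
            for _ in range(iters):
                m = 0.5 * m + 0.5 * np.tanh(beta * (J @ m + h))
            return m

        h = np.full(n, budget / n)        # start from a uniform allocation
        for _ in range(100):
            m = mean_field_m(h)
            D = np.diag(1.0 - m**2)
            # susceptibility chi = dm/dh = beta * (I - beta D J)^(-1) D
            chi = beta * np.linalg.solve(np.eye(n) - beta * D @ J, D)
            h += lr * chi.T @ np.ones(n)  # ascend the total magnetization
            h = np.clip(h, 0.0, None)
            h *= budget / h.sum()         # crude projection back onto the budget
        print("final magnetization:", mean_field_m(h).sum())

    Varying beta in a sketch like this reproduces the qualitative finding quoted above: at high temperature the budget concentrates on hubs, at low temperature on easily influenced peripheral nodes.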

  20. Using Statistical Mechanics and Entropy Principles to Interpret Variability in Power Law Models of the Streamflow Recession

    NASA Astrophysics Data System (ADS)

    Dralle, D.; Karst, N.; Thompson, S. E.

    2015-12-01

    Multiple competing theories suggest that power law behavior governs the observed first-order dynamics of streamflow recessions - the important process by which catchments dry out via the stream network, altering the availability of surface water resources and in-stream habitat. Frequently modeled as dq/dt = -aq^b, recessions typically exhibit a high degree of variability, even within a single catchment, as revealed by significant shifts in the values of "a" and "b" across recession events. One potential source of this variability lies in underlying, hard-to-observe fluctuations in how catchment water storage is partitioned amongst distinct storage elements, each having different discharge behaviors. Testing this and competing hypotheses with widely available streamflow timeseries, however, has been hindered by a power law scaling artifact that obscures meaningful covariation between the recession parameters, "a" and "b". Here we briefly outline a technique that removes this artifact, revealing intriguing new patterns in the joint distribution of recession parameters. Using long-term flow data from catchments in Northern California, we explore temporal variations, and find that the "a" parameter varies strongly with catchment wetness. Then we explore how the "b" parameter changes with "a", and find that measures of its variation are maximized at intermediate "a" values. We propose an interpretation of this pattern based on statistical mechanics, meaning "b" can be viewed as an indicator of the catchment "microstate" - i.e. the partitioning of storage - and "a" as a measure of the catchment macrostate (i.e. the total storage). In statistical mechanics, entropy (i.e. microstate variance, that is the variance of "b") is maximized for intermediate values of extensive variables (i.e. wetness, "a"), as observed in the recession data. This interpretation of "a" and "b" was supported by model runs using a multiple-reservoir catchment toy model, and lends support to the
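
    A minimal sketch of the standard event-scale estimation of "a" and "b" (generic log-log regression practice, not the artifact-removal technique the abstract introduces):

        # Fit dq/dt = -a q^b for one recession event by OLS on
        # log(-dq/dt) versus log(q), using finite differences.
        import numpy as np

        def fit_recession(q, dt=1.0):
            """q: evenly spaced streamflow samples during one recession."""
            q = np.asarray(q, dtype=float)
            dqdt = np.diff(q) / dt                   # negative during recession
            qm = 0.5 * (q[:-1] + q[1:])              # midpoint flow
            keep = dqdt < 0
            slope, intercept = np.polyfit(np.log(qm[keep]),
                                          np.log(-dqdt[keep]), 1)
            return np.exp(intercept), slope          # estimates of a, b

        # synthetic recession generated with a = 0.1, b = 1.5
        t = np.arange(0, 30.0)
        a_true, b_true = 0.1, 1.5
        q = (a_true * (b_true - 1) * t + 1.0) ** (1 / (1 - b_true))
        print(fit_recession(q))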

  1. Electronic modules easily separated from heat sink

    NASA Technical Reports Server (NTRS)

    1965-01-01

    A metal heat sink and electronic modules bonded to a thermal bridge can be easily cleaved apart, allowing the modules to be removed for replacement or repair. A thin film of grease between a fluorocarbon polymer film on the metal heat sink and an adhesive film on the modules acts as the cleavage plane.

  2. Variation in reaction norms: Statistical considerations and biological interpretation.

    PubMed

    Morrissey, Michael B; Liefting, Maartje

    2016-09-01

    Analysis of reaction norms, the functions by which the phenotype produced by a given genotype depends on the environment, is critical to studying many aspects of phenotypic evolution. Different techniques are available for quantifying different aspects of reaction norm variation. We examine what biological inferences can be drawn from some of the more readily applicable analyses for studying reaction norms. We adopt a strongly biologically motivated view, but draw on statistical theory to highlight strengths and drawbacks of different techniques. In particular, consideration of some formal statistical theory leads to revision of some recently, and forcefully, advocated opinions on reaction norm analysis. We clarify what simple analysis of the slope between mean phenotype in two environments can tell us about reaction norms, explore the conditions under which polynomial regression can provide robust inferences about reaction norm shape, and explore how different existing approaches may be used to draw inferences about variation in reaction norm shape. We show how mixed model-based approaches can provide more robust inferences than more commonly used multistep statistical approaches, and derive new metrics of the relative importance of variation in reaction norm intercepts, slopes, and curvatures. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  3. Statistical Reform in School Psychology Research: A Synthesis

    ERIC Educational Resources Information Center

    Swaminathan, Hariharan; Rogers, H. Jane

    2007-01-01

    Statistical reform in school psychology research is discussed in terms of research designs, measurement issues, statistical modeling and analysis procedures, interpretation and reporting of statistical results, and finally statistics education.

  4. Revolutionizing volunteer interpreter services: an evaluation of an innovative medical interpreter education program.

    PubMed

    Hasbún Avalos, Oswaldo; Pennington, Kaylin; Osterberg, Lars

    2013-12-01

    In our ever-increasingly multicultural, multilingual society, medical interpreters serve an important role in the provision of care. Though it is known that using untrained interpreters leads to decreased quality of care for limited English proficiency patients, because of a short supply of professionals and a lack of formalized, feasible education programs for volunteers, community health centers and internal medicine practices continue to rely on untrained interpreters. To develop and formally evaluate a novel medical interpreter education program that encompasses major tenets of interpretation, tailored to the needs of volunteer medical interpreters. One-armed, quasi-experimental retro-pre-post study using survey ratings and feedback correlated by assessment scores to determine educational intervention effects. Thirty-eight students; 24 Spanish, nine Mandarin, and five Vietnamese. The majority had prior interpreting experience but no formal medical interpreter training. Students completed retrospective pre-test and post-test surveys measuring confidence in and perceived knowledge of key skills of interpretation. Primary outcome measures were a 10-point Likert scale for survey questions of knowledge, skills, and confidence, written and oral assessments of interpreter skills, and qualitative evidence of newfound knowledge in written reflections. Analyses showed a statistically significant (P <0.001) change of about two points in mean self-ratings on knowledge, skills, and confidence, with large effect sizes (d > 0.8). The second half of the program was also quantitatively and qualitatively shown to be a vital learning experience, resulting in 18 % more students passing the oral assessments; a 19 % increase in mean scores for written assessments; and a newfound understanding of interpreter roles and ways to navigate them. This innovative program was successful in increasing volunteer interpreters' skills and knowledge of interpretation, as well as confidence

  5. Statistical Literacy as a Function of Online versus Hybrid Course Delivery Format for an Introductory Graduate Statistics Course

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie L.; Acquaye, Hannah; Griffith, Matthew D.; Jo, Hang; Matthews, Ken; Acharya, Parul

    2017-01-01

    Statistical literacy refers to understanding fundamental statistical concepts. Assessment of statistical literacy can take the forms of tasks that require students to identify, translate, compute, read, and interpret data. In addition, statistical instruction can take many forms encompassing course delivery format such as face-to-face, hybrid,…

  6. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  7. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  8. Statistical characteristics of MST radar echoes and its interpretation

    NASA Technical Reports Server (NTRS)

    Woodman, Ronald F.

    1989-01-01

    Two concepts of fundamental importance are reviewed: the autocorrelation function and the frequency power spectrum. In addition, some turbulence concepts, the relationship between radar signals and atmospheric medium statistics, partial reflection, and the characteristics of noise and clutter interference are discussed.
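
    For reference, the two reviewed concepts are linked by the Wiener-Khinchin theorem (standard definition, not a formula quoted from the paper): for a wide-sense stationary signal s(t),

        R(\tau) = \bigl\langle s(t)\, s^{*}(t+\tau) \bigr\rangle,
        \qquad
        S(f) = \int_{-\infty}^{\infty} R(\tau)\, e^{-2\pi i f \tau}\, d\tau

    so the frequency power spectrum is the Fourier transform of the autocorrelation function, which is why either can serve as the statistical summary of the radar echoes.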

  9. Enhancing the Teaching of Statistics: Portfolio Theory, an Application of Statistics in Finance

    ERIC Educational Resources Information Center

    Christou, Nicolas

    2008-01-01

    In this paper we present an application of statistics using real stock market data. Most, if not all, students have some familiarity with the stock market (or at least they have heard about it) and therefore can understand the problem easily. It is the real data analysis that students find interesting. Here we explore the building of efficient…
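
    A minimal sketch of the kind of computation such a course exercise involves: the closed-form minimum-variance portfolio w = S^(-1) 1 / (1' S^(-1) 1) from a sample covariance matrix (the return numbers are invented for illustration, not the article's data):

        # Minimum-variance portfolio weights from a covariance matrix S.
        import numpy as np

        returns = np.array([[0.01,  0.02, -0.01],
                            [0.00,  0.01,  0.02],
                            [0.02, -0.01,  0.01],
                            [0.01,  0.00,  0.00]])   # hypothetical daily returns

        S = np.cov(returns, rowvar=False)            # 3x3 sample covariance
        ones = np.ones(S.shape[1])
        w = np.linalg.solve(S, ones)                 # S^(-1) 1
        w /= w.sum()                                 # normalize to sum to 1
        print("minimum-variance weights:", np.round(w, 3))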

  10. Can patients interpret health information? An assessment of the medical data interpretation test.

    PubMed

    Schwartz, Lisa M; Woloshin, Steven; Welch, H Gilbert

    2005-01-01

    Objective: To establish the reliability/validity of an 18-item test of patients' medical data interpretation skills. Design: Survey with retest after 2 weeks. Subjects: 178 people recruited from advertisements in local newspapers, an outpatient clinic, and a hospital open house. Results: The percentage of correct answers to individual items ranged from 20% to 87%, and medical data interpretation test scores (on a 0-100 scale) were normally distributed (median 61.1, mean 61.0, range 6-94). Reliability was good (test-retest correlation=0.67, Cronbach's alpha=0.71). Construct validity was supported in several ways. Higher scores were found among people with highest versus lowest numeracy (71 v. 36, P<0.001), highest quantitative literacy (65 v. 28, P<0.001), and highest education (69 v. 42, P=0.004). Scores for 15 physician experts also completing the survey were significantly higher than for participants with other postgraduate degrees (mean score 89 v. 69, P<0.001). Conclusion: The medical data interpretation test is a reliable and valid measure of the ability to interpret medical statistics.

  11. Conformity and statistical tolerancing

    NASA Astrophysics Data System (ADS)

    Leblond, Laurent; Pillet, Maurice

    2018-02-01

    Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931; reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One of the probable reasons for this low utilization is undoubtedly the difficulty for designers to anticipate the risks of this approach. Arithmetic (worst-case) tolerancing allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition: an interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion for the parts that guarantees low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity, and we use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring/dispersion space).
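
    For concreteness, the contrast at the heart of the paper in its most common textbook form (generic notation, not the authors'): a worst-case stack adds component tolerances, a statistical (root-sum-of-squares) stack adds them in quadrature:

        # Worst-case vs. statistical (RSS) stacking of tolerances t_i.
        import math

        t = [0.10, 0.05, 0.08, 0.12]             # hypothetical component tolerances

        worst_case = sum(t)                       # arithmetic: guaranteed conformity
        rss = math.sqrt(sum(ti**2 for ti in t))   # statistical: probabilistic claim
        print(f"worst case = {worst_case:.3f}, RSS = {rss:.3f}")

    The RSS stack is tighter, which is exactly why its interpretation requires the probabilistic notion of conformity the paper formalizes rather than a simple interval.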

  12. [Triple-type theory of statistics and its application in the scientific research of biomedicine].

    PubMed

    Hu, Liang-ping; Liu, Hui-gang

    2005-07-20

    To point out the crux of why so many people fail to grasp statistics, and to put forward a "triple-type theory of statistics" to solve the problem in a creative way. Based on long experience in teaching and research in statistics, the triple-type theory was formulated and clarified. Examples are provided to demonstrate that the three types, i.e., the expressive type, the prototype, and the standardized type, are the essentials for people to apply statistics rationally both in theory and in practice; moreover, it is demonstrated by some instances that the three types are correlated with each other. The theory can help people to see the essence when interpreting and analyzing problems of experimental design and statistical analysis in medical research work. Investigations reveal that for some questions the three types are mutually identical; for some questions the prototype is also the standardized type; and for still others the three types are distinct from each other. It has been shown that in some multifactor experimental research no standardized type corresponding to the prototype exists at all, because the researchers have committed the mistake of "incomplete control" in setting up experimental groups; this is a problem that should be solved by the concept and method of "division". Once the triple type for each question is clarified, a proper experimental design and statistical method can be chosen easily. The triple-type theory of statistics can help people avoid statistical mistakes, or at least decrease the misuse rate dramatically, and improve the quality, level, and speed of biomedical research in the process of applying statistics. It can also help improve the quality of statistical textbooks and the teaching of statistics, and it demonstrates how to advance biomedical statistics.

  13. Radiologist Uncertainty and the Interpretation of Screening

    PubMed Central

    Carney, Patricia A.; Elmore, Joann G.; Abraham, Linn A.; Gerrity, Martha S.; Hendrick, R. Edward; Taplin, Stephen H.; Barlow, William E.; Cutter, Gary R.; Poplack, Steven P.; D’Orsi, Carl J.

    2011-01-01

    Objective: To determine radiologists’ reactions to uncertainty when interpreting mammography and the extent to which radiologist uncertainty explains variability in interpretive performance. Methods: The authors used a mailed survey to assess demographic and clinical characteristics of radiologists and reactions to uncertainty associated with practice. Responses were linked to radiologists’ actual interpretive performance data obtained from 3 regionally located mammography registries. Results: More than 180 radiologists were eligible to participate, and 139 consented for a response rate of 76.8%. Radiologist gender, more years interpreting, and higher volume were associated with lower uncertainty scores. Positive predictive value, recall rates, and specificity were more affected by reactions to uncertainty than sensitivity or negative predictive value; however, none of these relationships was statistically significant. Conclusion: Certain practice factors, such as gender and years of interpretive experience, affect uncertainty scores. Radiologists’ reactions to uncertainty do not appear to affect interpretive performance. PMID:15155014

  14. Admixture, Population Structure, and F-Statistics.

    PubMed

    Peter, Benjamin M

    2016-04-01

    Many questions about human genetic history can be addressed by examining the patterns of shared genetic variation between sets of populations. A useful methodological framework for this purpose is F-statistics, which measure shared genetic drift between sets of two, three, and four populations and can be used to test simple and complex hypotheses about admixture between populations. This article provides context from phylogenetic and population genetic theory. I review how F-statistics can be interpreted as branch lengths or paths and derive new interpretations using coalescent theory. I further show that the admixture tests can be interpreted as testing general properties of phylogenies, allowing the extension of some ideas to arbitrary phylogenetic trees. The new results are used to investigate the behavior of the statistics under different models of population structure and to show how population substructure complicates inference. The results lead to simplified estimators in many cases, and I recommend replacing F3 with the average number of pairwise differences for estimating population divergence. Copyright © 2016 by the Genetics Society of America.
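
    For readers unfamiliar with these statistics, the following minimal sketch (simulated allele frequencies; not the article's code) computes f2 and f3 from per-population frequencies and shows the negative-f3 admixture signal:

      # A minimal sketch of basic f-statistics computed from per-population
      # allele frequencies at L SNPs. Negative f3(X; A, B) is the classic
      # signal that X is admixed between sources related to A and B.
      # Frequencies here are simulated, purely for illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      L = 10_000
      a = rng.uniform(0.05, 0.95, L)   # allele frequencies in population A
      b = rng.uniform(0.05, 0.95, L)   # allele frequencies in population B
      x = 0.5 * a + 0.5 * b            # X as a 50/50 mixture of A and B

      def f2(p, q):
          """Shared-drift distance between two populations."""
          return np.mean((p - q) ** 2)

      def f3(px, pa, pb):
          """f3(X; A, B): negative values indicate admixture in X."""
          return np.mean((px - pa) * (px - pb))

      print(f"f2(A,B)    = {f2(a, b):.4f}")
      print(f"f3(X; A,B) = {f3(x, a, b):.4f}")  # negative for an admixed X

    Note this uses true population frequencies; estimates from finite samples need the bias corrections the literature describes.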

  15. SOCR: Statistics Online Computational Resource

    ERIC Educational Resources Information Center

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…

  16. Dynamical interpretation of conditional patterns

    NASA Technical Reports Server (NTRS)

    Adrian, R. J.; Moser, R. D.; Moin, P.

    1988-01-01

    While great progress is being made in characterizing the 3-D structure of organized turbulent motions using conditional averaging analysis, there is a lack of theoretical guidance regarding the interpretation and utilization of such information. Questions concerning the significance of the structures, their contributions to various transport properties, and their dynamics cannot be answered without recourse to appropriate dynamical governing equations. One approach which addresses some of these questions uses the conditional fields as initial conditions and calculates their evolution from the Navier-Stokes equations, yielding valuable information about stability, growth, and longevity of the mean structure. To interpret statistical aspects of the structures, a different type of theory which deals with the structures in the context of their contributions to the statistics of the flow is needed. As a first step toward this end, an effort was made to integrate the structural information from the study of organized structures with a suitable statistical theory. This is done by stochastically estimating the two-point conditional averages that appear in the equation for the one-point probability density function, and relating the structures to the conditional stresses. Salient features of the estimates are identified, and the structure of the one-point estimates in channel flow is defined.

  17. Impact of Immediate Interpretation of Screening Tomosynthesis Mammography on Performance Metrics.

    PubMed

    Winkler, Nicole S; Freer, Phoebe; Anzai, Yoshimi; Hu, Nan; Stein, Matthew

    2018-05-07

    This study aimed to compare performance metrics for immediate and delayed batch interpretation of screening tomosynthesis mammograms. This HIPAA-compliant study was approved by the institutional review board with a waiver of consent. A retrospective analysis of screening performance metrics for tomosynthesis mammograms interpreted in 2015, when mammograms were read immediately, was compared to historical controls from 2013 to 2014, when mammograms were batch interpreted after the patient had departed. A total of 5518 screening tomosynthesis mammograms (n = 1212 for batch interpretation and n = 4306 for immediate interpretation) were evaluated. The larger sample size for the latter group reflects a group practice shift to performing tomosynthesis for the majority of patients. Age, breast density, comparison examinations, and high-risk status were compared. An asymptotic proportion test and multivariable analysis were used to compare performance metrics. There was no statistically significant difference in recall or cancer detection rates for the batch interpretation group compared to the immediate interpretation group, with respective recall rates of 6.5% vs 5.3% = +1.2% (95% confidence interval -0.3 to 2.7%; P = .101) and cancer detection rates of 6.6 vs 7.2 per thousand = -0.6 (95% confidence interval -5.9 to 4.6; P = .825). There was no statistically significant difference in positive predictive values (PPVs), including PPV1 (screening recall), PPV2 (biopsy recommendation), or PPV3 (biopsy performed), with batch interpretation (10.1%, 42.1%, and 40.0%, respectively) and immediate interpretation (13.6%, 39.2%, and 39.7%, respectively). After adjusting for age, breast density, high-risk status, and comparison mammogram, there was no difference in the odds of recall or cancer detection between the two groups. There is no statistically significant difference in interpretation performance metrics for screening tomosynthesis mammograms interpreted immediately rather than in delayed batches.
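
    The recall-rate comparison above can be reproduced approximately with standard large-sample formulas; the sketch below (counts back-calculated from the reported rates) computes the Wald confidence interval for a difference of two proportions:

      # A minimal sketch, using standard large-sample formulas (not the
      # study's code), of the comparison reported above: recall rates of
      # 6.5% (n=1212, batch) vs 5.3% (n=4306, immediate).
      import math

      def diff_ci(x1, n1, x2, n2, z=1.96):
          """Difference p1 - p2 with a Wald 95% confidence interval."""
          p1, p2 = x1 / n1, x2 / n2
          se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
          d = p1 - p2
          return d, (d - z * se, d + z * se)

      # approximate recall counts implied by the reported rates
      d, (lo, hi) = diff_ci(round(0.065 * 1212), 1212, round(0.053 * 4306), 4306)
      print(f"difference = {d:+.1%}, 95% CI ({lo:+.1%}, {hi:+.1%})")
      # -> roughly +1.2%, CI (-0.3%, +2.7%): the interval covers 0, matching
      #    the reported lack of a statistically significant difference.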

  18. Forest statistics of western Kentucky

    Treesearch

    The Forest Survey Organization Central States Forest Experiment Station

    1950-01-01

    This Survey Release presents the more significant preliminary statistics on the forest area and timber volume for the western region of Kentucky. Similar reports for the remainder of the state will be published as soon as statistical tabulations are completed. Later, an analytical report for the state will be published which will interpret forest area, timber volume,...

  19. Forest statistics of southern Indiana

    Treesearch

    The Forest Survey Organization Central States Forest Experiment Station

    1951-01-01

    This Survey Release presents the more significant preliminary statistics on the forest area and timber volume for each of the three regions of southern Indiana. A similar report will be published for the two northern Indiana regions. Later, an analytical report for the state will be published which will interpret statistics on forest area, timber volume, growth, and...

  20. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
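
    The mixed discrete-continuous structure described above is easy to visualize by simulation; the sketch below (distributional assumptions are mine, not the authors' fitted model) generates latent scores, conditional falls, and the resulting zero-inflated observed scores:

      # A minimal simulation sketch of the data structure described above: a
      # continuous latent equilibrium score (ES) always exists, but a fall
      # occurs with probability that increases as the latent score decreases,
      # and fallen trials are recorded as ES = 0.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000

      latent_es = np.clip(rng.normal(75, 12, n), 0, 100)   # latent ES, 0..100
      p_fall = 1 / (1 + np.exp((latent_es - 40) / 6))      # falls likelier at low ES
      fell = rng.uniform(size=n) < p_fall
      observed_es = np.where(fell, 0.0, latent_es)         # mixed discrete-continuous

      print(f"fall rate: {fell.mean():.1%}")
      print(f"mean latent ES:   {latent_es.mean():.1f}")
      print(f"mean observed ES: {observed_es.mean():.1f}")  # pulled down by the zeros
      # Standard ANOVA on observed_es ignores the point mass at zero; the
      # paper's quasi-maximum-likelihood approach instead models latent_es
      # together with p(fall | latent_es).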

  1. Interpretable Decision Sets: A Joint Framework for Description and Prediction

    PubMed Central

    Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure

    2016-01-01

    One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
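
    To make the notion concrete, the sketch below shows a toy decision set (hypothetical rules, not those learned in the paper): each if-then rule is independently readable, and a default class covers regions no rule claims:

      # A minimal sketch of a decision set: independent if-then rules, each of
      # which can be read and applied on its own. Rules here are invented for
      # illustration only.
      def rule_1(x):  # IF age > 50 AND bp == "high" THEN risk = "high"
          return "high" if x["age"] > 50 and x["bp"] == "high" else None

      def rule_2(x):  # IF smoker AND bmi >= 30 THEN risk = "high"
          return "high" if x["smoker"] and x["bmi"] >= 30 else None

      def rule_3(x):  # IF age <= 40 AND NOT smoker THEN risk = "low"
          return "low" if x["age"] <= 40 and not x["smoker"] else None

      DECISION_SET = [rule_1, rule_2, rule_3]

      def predict(x, default="low"):
          # With non-overlapping rules, order does not matter; the default
          # class covers feature-space regions no rule claims.
          for rule in DECISION_SET:
              label = rule(x)
              if label is not None:
                  return label
          return default

      print(predict({"age": 62, "bp": "high", "smoker": False, "bmi": 24}))   # high
      print(predict({"age": 35, "bp": "normal", "smoker": False, "bmi": 22})) # low

    The paper's contribution is learning such a set automatically while jointly optimizing accuracy, rule count, rule length, and non-overlap.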

  2. Interpreting clinical trial results by deductive reasoning: In search of improved trial design.

    PubMed

    Kurbel, Sven; Mihaljević, Slobodan

    2017-10-01

    Clinical trial results are often interpreted by inductive reasoning, in a trial design-limited manner, directed toward modifications of the current clinical practice. Deductive reasoning is an alternative in which results of relevant trials are combined in indisputable premises that lead to a conclusion easily testable in future trials. © 2017 WILEY Periodicals, Inc.

  3. Statistical and population genetics issues of two Hungarian datasets from the aspect of DNA evidence interpretation.

    PubMed

    Szabolcsi, Zoltán; Farkas, Zsuzsa; Borbély, Andrea; Bárány, Gusztáv; Varga, Dániel; Heinrich, Attila; Völgyi, Antónia; Pamjav, Horolma

    2015-11-01

    When the DNA profile from a crime scene matches that of a suspect, the weight of the DNA evidence depends on an unbiased estimation of the match probability of the profiles. For this reason, databases reflecting the actual allele frequencies of the relevant population must be established and expanded. 21,473 complete DNA profiles from Databank samples were used to establish the allele frequency database representing the population of Hungarian suspects. We used fifteen STR loci (PowerPlex ESI16), including five new ESS loci. The aim was to calculate the statistical, forensic efficiency parameters for the Databank samples and compare the newly detected data to the earlier report. Population substructure caused by relatedness may influence the estimated profile frequencies. As our Databank profiles were considered non-random samples, possible relationships between the suspects can be assumed. Therefore, the population inbreeding effect was estimated using the FIS calculation. The overall inbreeding parameter was found to be 0.0106. Furthermore, we tested the impact of the two allele frequency datasets on 101 randomly chosen STR profiles, including full and partial profiles. The 95% confidence interval estimates for the profile frequencies (pM) resulted in a tighter range when we used the new dataset compared to the previously published ones. We found that FIS had less effect on frequency values in the 21,473 samples than the application of a minimum allele frequency. No genetic substructure was detected by STRUCTURE analysis. Due to the low level of inbreeding and the high number of samples, the new dataset provides unbiased and precise estimates of LR for the statistical interpretation of forensic casework and allows us to use lower allele frequencies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
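
    The role of the inbreeding parameter in such casework can be illustrated with the standard product-rule formulas; the sketch below uses hypothetical allele frequencies, not databank values, and is not the study's pipeline:

      # A minimal sketch of how an inbreeding coefficient F (here the reported
      # FIS of about 0.0106) enters single-locus genotype frequencies before
      # multiplying across independent STR loci (classic Wright correction).
      import math

      F = 0.0106  # overall inbreeding parameter reported for the databank

      def genotype_freq(p, q=None, f=F):
          """Genotype frequency with inbreeding correction.
          Homozygote (q is None): p^2 + f*p*(1-p); heterozygote: 2*p*q*(1-f)."""
          if q is None:
              return p * p + f * p * (1 - p)
          return 2 * p * q * (1 - f)

      # hypothetical allele frequencies at three loci of a profile:
      # (p, q) for heterozygotes, (p, None) for homozygotes
      loci = [(0.12, 0.08), (0.21, None), (0.05, 0.31)]
      pM = math.prod(genotype_freq(p, q) for p, q in loci)
      print(f"profile match probability: {pM:.3e}")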

  4. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high–throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
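
    A minimal sketch of the general idea (my reading of the abstract, with invented counts) follows: weight each structural feature by the significance of its enrichment in toxic compounds, then score compounds additively:

      # A minimal sketch of significance-weighted feature scoring: each
      # structural feature gets a weight from the statistical significance of
      # its enrichment in toxic compounds (Fisher's exact test), and a new
      # compound is scored by summing the weights of its features. Counts and
      # feature names are hypothetical.
      from math import log10
      from scipy.stats import fisher_exact

      # (toxic with feature, toxic without, nontoxic with, nontoxic without)
      feature_counts = {
          "nitro_aromatic": (40, 60, 5, 395),
          "aryl_amine":     (25, 75, 20, 380),
          "plain_alkane":   (10, 90, 45, 355),
      }

      weights = {}
      for feat, (a, b, c, d) in feature_counts.items():
          _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
          weights[feat] = -log10(p) if p < 0.05 else 0.0  # enrichment significance

      def score(features):
          """Additive toxicity score of a compound from its structural features."""
          return sum(weights.get(f, 0.0) for f in features)

      print({k: round(v, 2) for k, v in weights.items()})
      print("score:", round(score({"nitro_aromatic", "aryl_amine"}), 2))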

  5. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    PubMed

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models have been built using multiple learning algorithms, including support vector machine and random forest. The models were built on public Ames mutagenicity data, and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretations revealed links that accord closely with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.

  6. Structural interpretation of seismic data and inherent uncertainties

    NASA Astrophysics Data System (ADS)

    Bond, Clare

    2013-04-01

    Geoscience is perhaps unique in its reliance on incomplete datasets and building knowledge from their interpretation. This interpretation basis for the science is fundamental at all levels, from creation of a geological map to interpretation of remotely sensed data. To teach and understand better the uncertainties in dealing with incomplete data, we need to understand the strategies individual practitioners deploy that make them effective interpreters. The nature of interpretation is such that the interpreter needs to use their cognitive ability in the analysis of the data to propose a sensible solution in their final output that is consistent not only with the original data but also with other knowledge and understanding. In a series of experiments, Bond et al. (2007, 2008, 2011, 2012) investigated the strategies and pitfalls of expert and non-expert interpretation of seismic images. These studies focused on large numbers of participants to provide a statistically sound basis for analysis of the results. The outcome of these experiments showed that a wide variety of conceptual models were applied to single seismic datasets, highlighting not only spatial variations in fault placements but also whether interpreters thought the faults existed at all, or agreed on their sense of movement. Further, statistical analysis suggests that the strategies an interpreter employs are more important than expert knowledge per se in developing successful interpretations: experts are successful because of their application of these techniques. In a new set of experiments, a small number of experts are studied closely to determine how they use their cognitive and reasoning skills in the interpretation of 2D seismic profiles. Live video and practitioner commentary were used to track the evolving interpretation and to gain insight into their decision processes. The outputs of the study allow us to create an educational resource of expert interpretation through online video footage and commentary with...

  7. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  8. MASH Suite: a user-friendly and versatile software interface for high-resolution mass spectrometry data interpretation and visualization.

    PubMed

    Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying

    2014-03-01

    The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.

  9. The Statistics of wood assays for preservative retention

    Treesearch

    Patricia K. Lebow; Scott W. Conklin

    2011-01-01

    This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.

  10. The emergent Copenhagen interpretation of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hollowood, Timothy J.

    2014-05-01

    We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. This interpretation describes a world in which definite measurement results are obtained with probabilities that reproduce the Born rule. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent bookkeeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems.

  11. The Sport Students’ Ability of Literacy and Statistical Reasoning

    NASA Astrophysics Data System (ADS)

    Hidayah, N.

    2017-03-01

    Literacy and statistical reasoning skills are very important for students in sport education colleges, because the material for statistical learning can be drawn from many of their activities, such as sport competitions, test and measurement results, predicting achievement from training, and finding connections among variables. This research describes sport education college students' literacy and statistical reasoning abilities related to identifying data types, probability, table interpretation, description and explanation using bar or pie graphs, explanation of variability, and the calculation and explanation of mean, median, and mode, as measured by an instrument. The instrument was administered to 50 college students majoring in sport; only 26% of the students scored above 30%, while the others remained below 30%. Across all subjects, 56% of students could identify data classifications, 49% could read, display, and interpret tables through graphics, 27% showed ability in probability, 33% could describe variability, and 16.32% could read, calculate, and describe mean, median, and mode. These results show that the sport students' literacy and statistical reasoning are not yet adequate and that their statistical study has not reached conceptual understanding, literacy training, and statistical reasoning, so it is critical to improve the sport students' ability in literacy and statistical reasoning.

  12. Autoadaptivity and optimization in distributed ECG interpretation.

    PubMed

    Augustyniak, Piotr

    2010-03-01

    This paper addresses principal issues of ECG interpretation adaptivity in a distributed surveillance network. In the age of pervasive access to wireless digital communication, distributed biosignal interpretation networks may not only optimally solve difficult medical cases, but also adapt the data acquisition, interpretation, and transmission to the patient's variable status and the availability of technical resources. The background of such adaptivity is the innovative use of results from the automatic ECG analysis for seamless remote modification of the interpreting software. Since the medical relevance of issued diagnostic data depends on the patient's status, interpretation adaptivity implies flexibility of report content and frequency. The proposed solutions are based on research into human experts' behavior, procedure reliability, and usage statistics. Despite the limited scale of our prototype client-server application, the tests yielded very promising results: transmission channel occupation was reduced by 2.6 to 5.6 times compared to the rigid reporting mode, and the remotely computed diagnostic outcome improved in over 80% of software adaptation attempts.

  13. Students' Interpretation of a Function Associated with a Real-Life Problem from Its Graph

    ERIC Educational Resources Information Center

    Mahir, Nevin

    2010-01-01

    The properties of a function such as limit, continuity, derivative, growth, or concavity can be determined more easily from its graph than by doing any algebraic operation. For this reason, it is important for students of mathematics to interpret some of the properties of a function from its graph. In this study, we investigated the competence of…

  14. A decision support system and rule-based algorithm to augment the human interpretation of the 12-lead electrocardiogram.

    PubMed

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J

    The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity in ECG interpretation often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process. This computerised decision support system has been named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies: diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed in which subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value = 0.1852), and the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although the results were not statistically significant, we found that: 1) our decision support tool increased the number of correct interpretations, and 2) the DDA algorithm suggested the correct interpretation more often than the human interpreter.
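
    The JSON-plus-rules design described above can be sketched in a few lines; the criteria below are invented placeholders, not the actual DDA rule base:

      # A minimal sketch of the pattern: diagnostic criteria stored as JSON,
      # queried by a rule-based matcher against the interpreter's annotations
      # to suggest diagnoses. Criteria and annotation strings are hypothetical.
      import json

      CRITERIA = json.loads("""
      [
        {"diagnosis": "Anterior STEMI",
         "requires": ["ST elevation V2", "ST elevation V3"]},
        {"diagnosis": "First-degree AV block",
         "requires": ["PR interval > 200 ms"]},
        {"diagnosis": "Sinus bradycardia",
         "requires": ["sinus rhythm", "rate < 60 bpm"]}
      ]
      """)

      def suggest(annotations):
          """Return diagnoses whose criteria are all present in the annotations."""
          found = set(annotations)
          return [c["diagnosis"] for c in CRITERIA
                  if all(req in found for req in c["requires"])]

      print(suggest(["sinus rhythm", "rate < 60 bpm", "PR interval > 200 ms"]))
      # -> ['First-degree AV block', 'Sinus bradycardia']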

  15. Origin of Disagreements in Tandem Mass Spectra Interpretation by Search Engines.

    PubMed

    Tessier, Dominique; Lollier, Virginie; Larré, Colette; Rogniaux, Hélène

    2016-10-07

    Several proteomic database search engines that interpret LC-MS/MS data do not identify the same set of peptides. These disagreements occur even when the scores of the peptide-to-spectrum matches suggest good confidence in the interpretation. Our study shows that these disagreements observed for the interpretations of a given spectrum are almost exclusively due to the variation of what we call the "peptide space", i.e., the set of peptides that are actually compared to the experimental spectra. We discuss the potential difficulties of precisely defining the "peptide space." Indeed, although several parameters that are generally reported in publications can easily be set to the same values, many additional parameters-with much less straightforward user access-might impact the "peptide space" used by each program. Moreover, in a configuration where each search engine identifies the same candidates for each spectrum, the inference of the proteins may remain quite different depending on the false discovery rate selected.

  16. Cancer Survival: An Overview of Measures, Uses, and Interpretation

    PubMed Central

    Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E.; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M.

    2014-01-01

    Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics, organized by their specific purpose: research and policy versus prognosis and clinical decision making. PMID:25417231

  17. Statistics Poster Challenge for Schools

    ERIC Educational Resources Information Center

    Payne, Brad; Freeman, Jenny; Stillman, Eleanor

    2013-01-01

    The analysis and interpretation of data are important life skills. A poster challenge for schoolchildren provides an innovative outlet for these skills and demonstrates their relevance to daily life. We discuss our Statistics Poster Challenge and the lessons we have learned.

  18. Multivariate statistical analysis strategy for multiple misfire detection in internal combustion engines

    NASA Astrophysics Data System (ADS)

    Hu, Chongqing; Li, Aihua; Zhao, Xingyang

    2011-02-01

    This paper proposes a multivariate statistical analysis approach to processing the instantaneous engine speed signal for the purpose of locating multiple misfire events in internal combustion engines. The state of each cylinder is described by a characteristic vector extracted from the instantaneous engine speed signal following a three-step procedure. These characteristic vectors are treated as the values of various process parameters of an engine cycle. Therefore, detection of misfire events and identification of misfiring cylinders can be accomplished by a principal component analysis (PCA) based pattern recognition methodology. The proposed algorithm can be implemented easily in practice because the threshold can be defined adaptively, without information about operating conditions. In addition, the effect of torsional vibration on the engine speed waveform is interpreted as the presence of a "super powerful" cylinder, which is also isolated by the algorithm. The misfiring cylinder and the super powerful cylinder are often adjacent in the firing sequence, so missed detections and false alarms can be avoided effectively by checking the relationship between the cylinders.
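
    A compact illustration of the PCA-based strategy follows (simulated feature vectors, not engine data; the feature definitions and threshold rule are assumptions of the sketch, mirroring the claim that no operating-condition information is needed):

      # A minimal sketch: one characteristic vector per cylinder per cycle,
      # PCA on the pooled vectors, and an adaptive outlier threshold derived
      # from the scores themselves. Data are simulated for illustration.
      import numpy as np

      rng = np.random.default_rng(2)
      n_cycles, n_cyl = 200, 6

      # hypothetical per-cylinder features (e.g., speed drop and its slope)
      features = rng.normal(0.0, 1.0, (n_cycles * n_cyl, 2))
      features[::n_cyl] += np.array([6.0, 4.0])  # first cylinder misfires each cycle

      # PCA via SVD on centred data
      X = features - features.mean(axis=0)
      _, _, Vt = np.linalg.svd(X, full_matrices=False)
      scores = X @ Vt.T

      # adaptive threshold from a robust spread of the score distances
      d = np.linalg.norm(scores, axis=1)
      med = np.median(d)
      threshold = med + 3 * np.median(np.abs(d - med))
      flags = (d > threshold).reshape(n_cycles, n_cyl)
      print("fraction of cycles flagged per cylinder:", flags.mean(axis=0).round(2))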

  19. Cancer survival: an overview of measures, uses, and interpretation.

    PubMed

    Mariotto, Angela B; Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M

    2014-11-01

    Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics, organized by their specific purpose: research and policy versus prognosis and clinical decision making. Published by Oxford University Press 2014.

  20. Visualization of the variability of 3D statistical shape models by animation.

    PubMed

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects, and knowledge of their statistical variability, are of great benefit in many computer-assisted medical applications such as image analysis, therapy, or surgery planning. Statistical shape models have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
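
    The "mean shape plus weighted modes" construction that underlies such animations can be sketched as follows (toy landmark vectors, not the liver or pelvic-bone models); sweeping a mode weight between -3 and +3 standard deviations yields the animation frames:

      # A minimal sketch of a point-distribution shape model: PCA over
      # corresponded landmark vectors gives modes of variation, and frames
      # for an animation are generated along the first mode. Data are toy.
      import numpy as np

      rng = np.random.default_rng(5)
      n_shapes, n_landmarks = 40, 25

      # hypothetical corresponded landmark sets, flattened to vectors
      mean_shape = rng.normal(size=2 * n_landmarks)
      variation = rng.normal(size=2 * n_landmarks)
      shapes = mean_shape + np.outer(rng.normal(size=n_shapes), variation)
      shapes += rng.normal(scale=0.05, size=shapes.shape)  # residual noise

      mu = shapes.mean(axis=0)
      U, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
      sigma1 = S[0] / np.sqrt(n_shapes - 1)   # std dev along first mode
      phi1 = Vt[0]                            # first mode of variation

      frames = [mu + b * sigma1 * phi1 for b in np.linspace(-3, 3, 25)]
      print(f"first mode explains {S[0]**2 / (S**2).sum():.0%} of variance,"
            f" {len(frames)} animation frames generated")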

  1. Proper interpretation of chronic toxicity studies and their statistics: A critique of "Which level of evidence does the US National Toxicology Program provide? Statistical considerations using the Technical Report 578 on Ginkgo biloba as an example".

    PubMed

    Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol

    2015-09-02

    A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
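
    The arithmetic at issue is easy to reproduce; the sketch below (standard calculations, mine, not from either paper) shows the naive expected count at the nominal level, and how implausible 209 "chance" significances become once one allows for the conservatism of tests on discrete data that the critique emphasizes:

      # A minimal sketch of the disputed numbers: naive expected count of
      # chance-significant tests at the nominal level, and the binomial tail
      # probability of seeing 209 significant results if the effective
      # per-test level is below nominal (conservative exact tests).
      from scipy.stats import binom

      n_tests = 4800  # Gaus's (overstated, per the authors) comparison count
      print("expected by chance at nominal 0.05:", n_tests * 0.05)  # 240.0

      for alpha_eff in (0.05, 0.02, 0.01):
          p_tail = binom.sf(208, n_tests, alpha_eff)  # P(X >= 209)
          print(f"alpha_eff={alpha_eff}: P(>=209 significant) = {p_tail:.2e}")
      # The smaller the effective per-test level, the less tenable the claim
      # that the 209 significant results arose by chance alone.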

  2. Statistically Valid Planting Trials

    Treesearch

    C. B. Briscoe

    1961-01-01

    More than 100 million tree seedlings are planted each year in Latin America, and at least ten times that many should be planted. Rational control and development of a program of such magnitude require establishing and interpreting carefully planned trial plantings which will yield statistically valid answers to real and important questions. Unfortunately, many...

  3. Statistics of Statisticians: Critical Mass of Statistics and Operational Research Groups

    NASA Astrophysics Data System (ADS)

    Kenna, Ralph; Berche, Bertrand

    Using a recently developed model, inspired by mean field theory in statistical physics, and data from the UK's Research Assessment Exercise, we analyse the relationship between the qualities of statistics and operational research groups and the quantities of researchers in them. Similar to other academic disciplines, we provide evidence for a linear dependency of quality on quantity up to an upper critical mass, which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group. The model also predicts a lower critical mass, which research groups should strive to achieve to avoid extinction. For statistics and operational research, the lower critical mass is estimated to be 9 ± 3. The upper critical mass, beyond which research quality does not significantly depend on group size, is 17 ± 6.

  4. Statistics for Radiology Research.

    PubMed

    Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua

    2017-02-01

    Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  5. Simulation of target interpretation based on infrared image features and psychology principle

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping

    2009-07-01

    Target feature extraction and identification are important and complicated steps in target interpretation: they directly affect the interpreter's psychosensory response to the target infrared image and ultimately decide target viability. Using statistical decision theory and psychological principles, four psychophysical experiments were designed and an interpretation model for infrared targets was established. The model derives a target detection probability by computing the similarity of four features between the target region and the background region delineated on the infrared image. Verified against a large number of practical target interpretations, the model can effectively simulate the target interpretation and detection process and produce objective interpretation results, providing technical support for target extraction, identification, and decision-making.

  6. The Effect Size Statistic: Overview of Various Choices.

    ERIC Educational Resources Information Center

    Mahadevan, Lakshmi

    Over the years, methodologists have been recommending that researchers use magnitude of effect estimates in result interpretation to highlight the distinction between statistical and practical significance (cf. R. Kirk, 1996). A magnitude of effect statistic (i.e., effect size) tells to what degree the dependent variable can be controlled,…

  7. Chi-Square Statistics, Tests of Hypothesis and Technology.

    ERIC Educational Resources Information Center

    Rochowicz, John A.

    The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…

  8. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model... to begin planning experiments for statistical inference applications. APPROACH In the ocean acoustics community over the past two decades... solutions for waveguide parameters. With the introduction of statistical inference to the field of ocean acoustics came the desire to interpret marginal

  9. Multi-scale structure and topological anomaly detection via a new network statistic: The onion decomposition.

    PubMed

    Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine

    2016-08-18

    We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
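
    Conveniently, the onion decomposition is available in recent versions of NetworkX as nx.onion_layers; the sketch below (graph choice and summary are mine) computes the onion spectrum, i.e., the number of vertices removed at each layer:

      # A minimal sketch computing the onion decomposition and the resulting
      # onion spectrum of a sample graph, alongside the k-core numbers it
      # refines. Requires NetworkX >= 2.4.
      import networkx as nx
      from collections import Counter

      G = nx.barabasi_albert_graph(1000, 3, seed=42)

      layers = nx.onion_layers(G)   # node -> layer at which it is removed
      cores = nx.core_number(G)     # node -> coreness (k-core membership)

      # onion spectrum: how many vertices are removed at each layer
      spectrum = Counter(layers.values())
      for layer in sorted(spectrum):
          print(f"layer {layer:2d}: {spectrum[layer]:4d} vertices")

      # the combined degree-onion view groups layers by the coreness they refine
      by_core = Counter((cores[v], layers[v]) for v in G)
      print("sample (coreness, layer) counts:", list(by_core.items())[:3])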

  10. On Some Assumptions of the Null Hypothesis Statistical Testing

    ERIC Educational Resources Information Center

    Patriota, Alexandre Galvão

    2017-01-01

    Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…

  11. New dimensions from statistical graphics for GIS (geographic information system) analysis and interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCord, R.A.; Olson, R.J.

    1988-01-01

    Environmental research and assessment activities at Oak Ridge National Laboratory (ORNL) include the analysis of spatial and temporal patterns of ecosystem response at a landscape scale. Analysis through use of geographic information system (GIS) involves an interaction between the user and thematic data sets frequently expressed as maps. A portion of GIS analysis has a mathematical or statistical aspect, especially for the analysis of temporal patterns. ARC/INFO is an excellent tool for manipulating GIS data and producing the appropriate map graphics. INFO also has some limited ability to produce statistical tabulation. At ORNL we have extended our capabilities by graphicallymore » interfacing ARC/INFO and SAS/GRAPH to provide a combined mapping and statistical graphics environment. With the data management, statistical, and graphics capabilities of SAS added to ARC/INFO, we have expanded the analytical and graphical dimensions of the GIS environment. Pie or bar charts, frequency curves, hydrographs, or scatter plots as produced by SAS can be added to maps from attribute data associated with ARC/INFO coverages. Numerous, small, simplified graphs can also become a source of complex map ''symbols.'' These additions extend the dimensions of GIS graphics to include time, details of the thematic composition, distribution, and interrelationships. 7 refs., 3 figs.« less

  12. Hold My Calls: An Activity for Introducing the Statistical Process

    ERIC Educational Resources Information Center

    Abel, Todd; Poling, Lisa

    2015-01-01

    Working with practicing teachers, this article demonstrates, through the facilitation of a statistical activity, how to introduce and investigate the unique qualities of the statistical process including: formulate a question, collect data, analyze data, and interpret data.

  13. The disagreeable behaviour of the kappa statistic.

    PubMed

    Flight, Laura; Julious, Steven A

    2015-01-01

    It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed. Copyright © 2014 John Wiley & Sons, Ltd.
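
    The marginal-sensitivity problem is easy to demonstrate; the sketch below (standard formulas, illustrative tables) compares Cohen's kappa with the prevalence-and-bias-adjusted kappa (PABAK) on two tables with identical observed agreement:

      # A minimal sketch for two raters on a binary outcome: observed
      # agreement, Cohen's kappa, and PABAK from a 2x2 agreement table.
      def kappa_summary(a, b, c, d):
          """2x2 agreement table: a=yes/yes, b=yes/no, c=no/yes, d=no/no."""
          n = a + b + c + d
          po = (a + d) / n                                     # observed agreement
          pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
          kappa = (po - pe) / (1 - pe)
          pabak = 2 * po - 1                                   # adjusted kappa
          return po, kappa, pabak

      # same observed agreement (90%), very different marginal totals:
      for table in [(45, 5, 5, 45), (85, 5, 5, 5)]:
          po, k, pabak = kappa_summary(*table)
          print(f"table={table}: agreement={po:.2f}, kappa={k:.2f}, PABAK={pabak:.2f}")
      # With skewed marginals kappa drops sharply (0.80 -> 0.44) even though
      # agreement and PABAK are unchanged: the sensitivity the note warns about.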

  14. Easily disassembled electrical connector for high voltage, high frequency connections

    DOEpatents

    Milner, Joseph R.

    1994-01-01

    An easily accessible electrical connector capable of rapid assembly and disassembly is described, wherein a wide metal conductor sheet may be evenly contacted over the entire width of the conductor sheet by opposing surfaces on the connector which provide an even clamping pressure against opposite surfaces of the metal conductor sheet using a single threaded actuating screw.

  15. Proper interpretation of chronic toxicity studies and their statistics: A critique of “Which level of evidence does the US National Toxicology Program provide? Statistical considerations using the Technical Report 578 on Ginkgo biloba as an example”

    PubMed Central

    Kissling, Grace E.; Haseman, Joseph K.; Zeiger, Errol

    2014-01-01

    A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP’s statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800 × 0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP’s decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus’ conclusion that such obvious responses merely “generate a hypothesis” rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. PMID:25261588

  16. Interpreting estimates of heritability--a note on the twin decomposition.

    PubMed

    Stenberg, Anders

    2013-03-01

    While most outcomes may in part be genetically mediated, quantifying genetic heritability is a different matter. Exploring data on twins and decomposing the variation is a classical method to determine whether variation in outcomes, e.g. IQ or schooling, originates from genetic endowments or environmental factors. Despite some criticism, the model is still widely used. The critique generally relates to how estimates of heritability may encompass environmental mediation. This aspect is sometimes left implicit by authors even though its relevance for the interpretation is potentially profound. This short note is an appeal for clarity from authors when interpreting the magnitude of heritability estimates. It is demonstrated how disregarding existing theoretical contributions can easily lead to unnecessary misinterpretations and/or controversies. The key arguments are also relevant for estimates based on data from adopted children or from modern molecular genetics research. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
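
    An L1-penalised logistic model of the kind compared above can be sketched with scikit-learn (synthetic data, not the xerostomia cohort); the penalty drives most coefficients to exactly zero, which is what makes the resulting model easy to interpret:

      # A minimal sketch of a LASSO-style logistic model for a binary
      # complication outcome: most coefficients shrink to exactly zero,
      # leaving a short list of selected predictors. Data are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n, p = 300, 20
      X = rng.normal(size=(n, p))                  # candidate dosimetric/clinical factors
      logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.0  # only two factors truly matter
      y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

      model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
      model.fit(X, y)

      selected = np.flatnonzero(model.coef_[0])
      print("selected predictors:", selected)      # typically a small subset
      print("coefficients:", model.coef_[0][selected].round(2))
      # In practice C (inverse penalty strength) is tuned by repeated
      # cross-validation, as in the comparison described above.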

  18. ALISE Library and Information Science Education Statistical Report, 1999.

    ERIC Educational Resources Information Center

    Daniel, Evelyn H., Ed.; Saye, Jerry D., Ed.

    This volume is the twentieth annual statistical report on library and information science (LIS) education published by the Association for Library and Information Science Education (ALISE). Its purpose is to compile, analyze, interpret, and report statistical (and other descriptive) information about library/information science programs offered by…

  19. Interpreting psychoanalytic interpretation: a fourfold perspective.

    PubMed

    Schermer, Victor L

    2011-12-01

    Following an overview of psychoanalytic interpretation in theory, practice, and historical context, as well as the question of whether interpretations have scientific validity, the author holds that hermeneutics, the philosophical and psychological study of interpretation, provides a rich understanding of recent developments in self psychology, inter-subjective and relational perspectives, attachment theory, and psycho-spiritual views on psychoanalytic process. He then offers four distinct hermeneutical vantage points regarding interpretation in the psychoanalytic context, including (1) Freud's adaptation of the Aristotelian view of interpretation as the uncovering of a set of predetermined meanings and structures; (2) the phenomenological view of interpretation as the laying bare of "the things themselves," that is, removing the coverings of objectification and concretization imposed by social norms and the conscious ego; (3) the dialogical existential view of interpretation as an ongoing relational process; and (4) the transformational understanding in which interpretation evokes a "presence" that transforms both patient and analyst. He concludes by contending that these perspectives are not mutually exclusive ways of conducting an analysis, but rather that all occur within the analyst's suspended attention, the caregiving and holding essential to good therapeutic outcomes, and the mutuality of the psychoanalytic dialogue.

  20. 29 CFR 2509.08-2 - Interpretive bulletin relating to the exercise of shareholder rights and written statements of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... interpretive bulletin provides guidance on the appropriateness under ERISA of active monitoring of corporate..., ERISA Sec. 404(a)(1)(D) does not shield the investment manager from liability for imprudent actions... long-term investments or where a plan may not be able to easily dispose such an investment. Active...

  1. 29 CFR 2509.08-2 - Interpretive bulletin relating to the exercise of shareholder rights and written statements of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... interpretive bulletin provides guidance on the appropriateness under ERISA of active monitoring of corporate..., ERISA Sec. 404(a)(1)(D) does not shield the investment manager from liability for imprudent actions... long-term investments or where a plan may not be able to easily dispose such an investment. Active...

  2. 29 CFR 2509.08-2 - Interpretive bulletin relating to the exercise of shareholder rights and written statements of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... interpretive bulletin provides guidance on the appropriateness under ERISA of active monitoring of corporate..., ERISA Sec. 404(a)(1)(D) does not shield the investment manager from liability for imprudent actions... long-term investments or where a plan may not be able to easily dispose such an investment. Active...

  3. 29 CFR 2509.08-2 - Interpretive bulletin relating to the exercise of shareholder rights and written statements of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... interpretive bulletin provides guidance on the appropriateness under ERISA of active monitoring of corporate..., ERISA Sec. 404(a)(1)(D) does not shield the investment manager from liability for imprudent actions... long-term investments or where a plan may not be able to easily dispose such an investment. Active...

  4. Easily disassembled electrical connector for high voltage, high frequency connections

    DOEpatents

    Milner, J.R.

    1994-05-10

    An easily accessible electrical connector capable of rapid assembly and disassembly is described. A wide metal conductor sheet is evenly contacted over its entire width by opposing surfaces on the connector, which apply even clamping pressure to opposite faces of the sheet by means of a single threaded actuating screw. 13 figures.

  5. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, such as STATA, S-PLUS, R, SPSS, SAS and Systat, we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  6. Teaching Business Statistics with Real Data to Undergraduates and the Use of Technology in the Class Room

    ERIC Educational Resources Information Center

    Singamsetti, Rao

    2007-01-01

    In this paper an attempt is made to highlight some issues of interpretation of statistical concepts and interpretation of results as taught in undergraduate Business statistics courses. The use of modern technology in the class room is shown to have increased the efficiency and the ease of learning and teaching in statistics. The importance of…

  7. Decomposing biodiversity data using the Latent Dirichlet Allocation model, a probabilistic multivariate statistical method

    Treesearch

    Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave

    2014-01-01

    We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
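
    As a concrete illustration of the decomposition idea, the sketch below fits an LDA model to a hypothetical site-by-species count matrix with scikit-learn; the data, component count, and library choice are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch: decompose a site-by-species abundance matrix into
    # component communities with LDA (hypothetical data; sklearn is an
    # assumption, not the authors' code).
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=3.0, size=(30, 12))   # 30 sites x 12 species

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    site_mixtures = lda.fit_transform(counts)      # per-site community proportions
    communities = lda.components_                  # per-community species weights

    # Each row of site_mixtures sums to ~1 and is directly interpretable
    # as the mix of component communities present at that site.
    print(site_mixtures[:5].round(2))
    ```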

  8. An easily regenerable enzyme reactor prepared from polymerized high internal phase emulsions.

    PubMed

    Ruan, Guihua; Wu, Zhenwei; Huang, Yipeng; Wei, Meiping; Su, Rihui; Du, Fuyou

    2016-04-22

    A large-scale, high-efficiency enzyme reactor based on a polymerized high internal phase emulsion monolith (polyHIPE) was prepared. First, a porous cross-linked polyHIPE monolith was prepared by in-situ thermal polymerization of a high internal phase emulsion containing styrene, divinylbenzene and polyglutaraldehyde. TPCK-trypsin was then immobilized on the monolithic polyHIPE. The performance of the resulting enzyme reactor was assessed by its ability to convert Nα-benzoyl-l-arginine ethyl ester to Nα-benzoyl-l-arginine, and by the protein digestibility of bovine serum albumin (BSA) and cytochrome c (Cyt-C). The results showed that the prepared enzyme reactor exhibited high enzyme immobilization efficiency and fast, easily controlled protein digestion. BSA and Cyt-C could be digested in 10 min with sequence coverages of 59% and 78%, respectively. The peptides and residual protein could be easily rinsed out of the reactor, and the reactor could be regenerated easily with 4 M HCl without any structural destruction. Its multiple interconnected chambers with good permeability, fast digestion and easy reproducibility indicate that the polyHIPE enzyme reactor is potentially well suited to applications in proteomics and catalysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods

    PubMed Central

    Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.

    2012-01-01

    Rationale and Objectives: Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods: Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results: The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions: The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570

  10. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    PubMed

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  11. An Experimental Approach to Teaching and Learning Elementary Statistical Mechanics

    ERIC Educational Resources Information Center

    Ellis, Frank B.; Ellis, David C.

    2008-01-01

    Introductory statistical mechanics is studied for a simple two-state system using an inexpensive and easily built apparatus. A large variety of demonstrations, suitable for students in high school and introductory university chemistry courses, are possible. This article details demonstrations for exothermic and endothermic reactions, the dynamic…

  12. Calibrated Peer Review for Interpreting Linear Regression Parameters: Results from a Graduate Course

    ERIC Educational Resources Information Center

    Enders, Felicity B.; Jenkins, Sarah; Hoverman, Verna

    2010-01-01

    Biostatistics is traditionally a difficult subject for students to learn. While the mathematical aspects are challenging, it can also be demanding for students to learn the exact language to use to correctly interpret statistical results. In particular, correctly interpreting the parameters from linear regression is both a vital tool and a…
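
    For readers who want the interpretation task made concrete, the following minimal Python sketch fits a simple linear regression and notes how each parameter is read; the toy data and the use of statsmodels are assumptions for illustration only.

    ```python
    # Minimal sketch of the interpretation task: fit a linear regression
    # and read off the parameters (hypothetical data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = rng.normal(size=100)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)

    model = sm.OLS(y, sm.add_constant(x)).fit()
    print(model.params)      # intercept ~ 2.0: expected y when x = 0
                             # slope ~ 0.5: expected change in y per unit change in x
    print(model.conf_int())  # 95% CIs qualify how precisely each is estimated
    ```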

  13. Developing and Assessing Students' Abilities To Interpret Research.

    ERIC Educational Resources Information Center

    Forsyth, G. Alfred; And Others

    A recent conference on statistics education recommended that more emphasis be placed on the interpretation of research (IOR). Ways for developing and assessing IOR and providing a systematic framework for creating and selecting instructional materials for the independent assessment of specific IOR concepts are the focus of this paper. The…

  14. Interpretations of Boxplots: Helping Middle School Students to Think outside the Box

    ERIC Educational Resources Information Center

    Edwards, Thomas G.; Özgün-Koca, Asli; Barr, John

    2017-01-01

    Boxplots are statistical representations for organizing and displaying data that are relatively easy to create with a five-number summary. However, boxplots are not as easy to understand, interpret, or connect with other statistical representations of the same data. We worked at two different schools with 259 middle school students who constructed…

  15. Securing wide appreciation of health statistics

    PubMed Central

    Pyrrait, A. M. do Amaral; Aubenque, M. J.; Benjamin, B.; de Groot, Meindert J. W.; Kohn, R.

    1954-01-01

    All the authors are agreed on the need for a certain publicizing of health statistics, but do Amaral Pyrrait points out that the medical profession prefers to convince itself rather than to be convinced. While there is great utility in articles and reviews in the professional press (especially for paramedical personnel) Aubenque, de Groot, and Kohn show how appreciation can effectively be secured by making statistics more easily understandable to the non-expert by, for instance, including readable commentaries in official publications, simplifying charts and tables, and preparing simple manuals on statistical methods. Aubenque and Kohn also stress the importance of linking health statistics to other economic and social information. Benjamin suggests that the principles of market research could to advantage be applied to health statistics to determine the precise needs of the “consumers”. At the same time, Aubenque points out that the value of the ultimate results must be clear to those who provide the data; for this, Kohn suggests that the enumerators must know exactly what is wanted and why. There is general agreement that some explanation of statistical methods and their uses should be given in the curricula of medical schools and that lectures and postgraduate courses should be arranged for practising physicians. PMID:13199668

  16. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  17. Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy

    NASA Astrophysics Data System (ADS)

    Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline

    2016-12-01

    Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and to determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants' eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains, constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results were statistically significant (α = 0.05) for 5 of the 11 stimuli, demonstrating that the gaze shift between ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
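
    A minimal Python sketch of the permutation procedure described above is given below, assuming gaze sequences encoded as lists of lead indices; the row-averaged Jensen-Shannon distance between transition matrices is one plausible reading of the distance used, and all names and data are hypothetical.

    ```python
    # Sketch: distance between gaze-transition Markov chains of two groups,
    # assessed by a permutation test (hypothetical encoding and data).
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    N_LEADS = 12

    def transition_matrix(sequences):
        """Row-normalized lead-to-lead transition counts pooled over sequences."""
        counts = np.ones((N_LEADS, N_LEADS))        # +1 smoothing avoids empty rows
        for seq in sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def chain_distance(group_a, group_b):
        ta, tb = transition_matrix(group_a), transition_matrix(group_b)
        return np.mean([jensenshannon(ra, rb) for ra, rb in zip(ta, tb)])

    def permutation_test(correct, incorrect, n_perm=10_000, seed=0):
        rng = np.random.default_rng(seed)
        observed = chain_distance(correct, incorrect)
        pooled, n = correct + incorrect, len(correct)
        null = []
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))
            null.append(chain_distance([pooled[i] for i in idx[:n]],
                                       [pooled[i] for i in idx[n:]]))
        return observed, np.mean(np.array(null) >= observed)   # one-sided p
    ```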

  18. Combining natural background levels (NBLs) assessment with indicator kriging analysis to improve groundwater quality data interpretation and management.

    PubMed

    Ducci, Daniela; de Melo, M Teresa Condesso; Preziosi, Elisabetta; Sellerino, Mariangela; Parrone, Daniele; Ribeiro, Luis

    2016-11-01

    The natural background level (NBL) concept is revisited and combined with indicator kriging method to analyze the spatial distribution of groundwater quality within a groundwater body (GWB). The aim is to provide a methodology to easily identify areas with the same probability of exceeding a given threshold (which may be a groundwater quality criteria, standards, or recommended limits for selected properties and constituents). Three case studies with different hydrogeological settings and located in two countries (Portugal and Italy) are used to derive NBL using the preselection method and validate the proposed methodology illustrating its main advantages over conventional statistical water quality analysis. Indicator kriging analysis was used to create probability maps of the three potential groundwater contaminants. The results clearly indicate the areas within a groundwater body that are potentially contaminated because the concentrations exceed the drinking water standards or even the local NBL, and cannot be justified by geogenic origin. The combined methodology developed facilitates the management of groundwater quality because it allows for the spatial interpretation of NBL values. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
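
    Several of the diagnostic-test quantities reviewed above follow directly from a 2×2 table; a minimal sketch with hypothetical counts:

    ```python
    # Sensitivity, specificity, accuracy, and likelihood ratios from a
    # 2x2 diagnostic table (hypothetical counts).
    tp, fn, fp, tn = 90, 10, 20, 80

    sensitivity = tp / (tp + fn)               # P(test+ | disease)
    specificity = tn / (tn + fp)               # P(test- | no disease)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

    print(f"Se={sensitivity:.2f} Sp={specificity:.2f} "
          f"Acc={accuracy:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
    ```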

  20. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  1. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  2. Statistical and methodological considerations for the interpretation of intranasal oxytocin studies

    PubMed Central

    Walum, Hasse; Waldman, Irwin D.; Young, Larry J.

    2015-01-01

    Over the last decade, oxytocin (OT) has received focus in numerous studies associating intranasal administration of this peptide with various aspects of human social behavior. These studies in humans are inspired by animal research, especially in rodents, showing that central manipulations of the OT system affect behavioral phenotypes related to social cognition, including parental behavior, social bonding and individual recognition. Taken together, these studies in humans appear to provide compelling, but sometimes bewildering evidence for the role of OT in influencing a vast array of complex social cognitive processes in humans. In this paper we investigate to what extent the human intranasal OT literature lends support to the hypothesis that intranasal OT consistently influences a wide spectrum of social behavior in humans. We do this by considering statistical features of studies within this field, including factors like statistical power, pre-study odds and bias. Our conclusion is that intranasal OT studies are generally underpowered and that there is a high probability that most of the published intranasal OT findings do not represent true effects. Thus the remarkable reports that intranasal OT influences a large number of human social behaviors should be viewed with healthy skepticism, and we make recommendations to improve the reliability of human OT studies in the future. PMID:26210057
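
    The power argument can be reproduced with a standard two-sample t-test power calculation; in the sketch below the effect size, sample size, and alpha are illustrative assumptions rather than values taken from the paper.

    ```python
    # Power calculation for a two-sided two-sample t test (illustrative
    # inputs, not values from the paper).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Sample size per group to detect a small-to-medium effect (d = 0.3)
    # with 80% power:
    n_per_group = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
    print(round(n_per_group))   # ~175 per group

    # Conversely, the power of a typical n = 25 per group study for d = 0.3:
    power = analysis.solve_power(effect_size=0.3, nobs1=25, alpha=0.05)
    print(round(power, 2))      # ~0.18
    ```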

  3. Digital Image Quality And Interpretability: Database And Hardcopy Studies

    NASA Astrophysics Data System (ADS)

    Snyder, H. L.; Maddox, M. E.; Shedivy, D. I.; Turpin, J. A.; Burke, J. J.; Strickland, R. N.

    1982-02-01

    Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels × 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photointerpreters. Each image is 86 mm square and represents 4096² 8-bit pixels. In the "interpretation" experiment, each photointerpreter (judge) spent approximately two days extracting essential elements of information (EEIs) from one degraded version of each scene at a constant Gaussian blur level (FWHM = 40, 84, or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories, based on the Shannon-Wiener measure of information, are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not statistically significant in the interpretation experiment, that of noise was significant, and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  4. On Ruch's Principle of Decreasing Mixing Distance in classical statistical physics

    NASA Astrophysics Data System (ADS)

    Busch, Paul; Quadt, Ralf

    1990-10-01

    Ruch's Principle of Decreasing Mixing Distance is reviewed as a statistical physical principle, and its basic support and geometric interpretation, the Ruch-Schranner-Seligman theorem, are generalized to be applicable to a large representative class of classical statistical systems.

  5. College Students' Interpretation of Research Reports on Group Differences: The Tall-Tale Effect

    ERIC Educational Resources Information Center

    Hogan, Thomas P.; Zaboski, Brian A.; Perry, Tiffany R.

    2015-01-01

    How does the student untrained in advanced statistics interpret results of research that reports a group difference? In two studies, statistically untrained college students were presented with abstracts or professional associations' reports and asked for estimates of scores obtained by the original participants in the studies. These estimates…

  6. ADHD Rating Scale-IV: Checklists, Norms, and Clinical Interpretation

    ERIC Educational Resources Information Center

    Pappas, Danielle

    2006-01-01

    This article reviews the "ADHD Rating Scale-IV: Checklist, norms, and clinical interpretation," is a norm-referenced checklist that measures the symptoms of attention deficit/hyperactivity disorder (ADHD) according to the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric…

  7. Statistical ecology comes of age

    PubMed Central

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  8. Statistical ecology comes of age.

    PubMed

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  9. Interpretations of the Patient-Therapist Relationship in Brief Dynamic Psychotherapy

    PubMed Central

    Amlo, Svein; Engelstad, Vibeke; Fossum, Arne; Sørlie, Tore; Høglend, Per; Heyerdahl, Oscar; Sørbye, Øystein

    1993-01-01

    The authors examined whether persistent analysis of the patient-therapist relationship in brief dynamic psychotherapy favorably affects long-term dynamic change in patients initially deemed suitable for such treatment. As in common practice, 22 highly suitable patients were given a high number of transference interpretations per session. A comparison group of 21 patients with lower suitability received the same treatment, but transference interpretations were withheld. Statistical adjustment for the deliberate nonequivalence in pretreatment suitability indicated a significant negative effect of high numbers of transference interpretations on long-term dynamic changes. Demographic variables, DSM-III diagnoses, additional treatment, life events in the follow-up years, or therapist effects did not explain or obscure the findings. PMID:22700155

  10. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  11. The Role of the Sampling Distribution in Understanding Statistical Inference

    ERIC Educational Resources Information Center

    Lipson, Kay

    2003-01-01

    Many statistics educators believe that few students develop the level of conceptual understanding essential for them to apply correctly the statistical techniques at their disposal and to interpret their outcomes appropriately. It is also commonly believed that the sampling distribution plays an important role in developing this understanding.…

  12. GeneTools--application for functional annotation and statistical hypothesis testing.

    PubMed

    Beisvag, Vidar; Jünge, Frode K R; Bergum, Hallgeir; Jølsum, Lars; Lydersen, Stian; Günther, Clara-Cecilie; Ramampiaro, Heri; Langaas, Mette; Sandvik, Arne K; Laegreid, Astrid

    2006-10-24

    Modern biology has shifted from "one gene" approaches to methods for genomic-scale analysis like microarray technology, which allow simultaneous measurement of thousands of genes. This has created a need for tools facilitating interpretation of biological data in "batch" mode. However, such tools often leave the investigator with large volumes of apparently unorganized information. To meet this interpretation challenge, gene-set or cluster testing has become a popular analytical tool. Many gene-set testing methods and software packages are now available, most of which use a variety of statistical tests to assess the genes in a set for biological information. However, the field is still evolving, and there is a great need for "integrated" solutions. GeneTools is a web service providing access to a database that brings together information from a broad range of resources. The annotation data are updated weekly, guaranteeing that users get the most recently available data. Data submitted by the user are stored in the database, where they can easily be updated, shared between users and exported in various formats. GeneTools provides three different tools: i) NMC Annotation Tool, which offers annotations from several databases like UniGene, Entrez Gene, SwissProt and GeneOntology, in both single- and batch-search mode. ii) GO Annotator Tool, where users can add new gene ontology (GO) annotations to genes of interest. These user-defined GO annotations can be used in further analysis or exported for public distribution. iii) eGOn, a tool for visualization and statistical hypothesis testing of GO category representation. As the first GO tool, eGOn supports hypothesis testing for three different situations (master-target situation, mutually exclusive target-target situation and intersecting target-target situation). An important additional function is an evidence-code filter that allows users to select the GO annotations for the analysis. GeneTools is the first "all in one

  13. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.

    2017-01-01

    Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation--and possibly more importantly, the replication--of results are…

  14. Statistical analysis of arthroplasty data

    PubMed Central

    2011-01-01

    It is envisaged that guidelines for statistical analysis and presentation of results will improve the quality and value of research. The Nordic Arthroplasty Register Association (NARA) has therefore developed guidelines for the statistical analysis of arthroplasty register data. The guidelines are divided into two parts, one with an introduction and a discussion of the background to the guidelines (Ranstam et al. 2011a, see pages x-y in this issue), and this one with a more technical statistical discussion on how specific problems can be handled. This second part contains (1) recommendations for the interpretation of methods used to calculate survival, (2) recommendations on how to deal with bilateral observations, and (3) a discussion of problems and pitfalls associated with analysis of factors that influence survival or comparisons between outcomes extracted from different hospitals. PMID:21619500
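
    As a minimal illustration of the kind of survival calculation the guidelines address, the sketch below computes a Kaplan-Meier estimate of implant survival; the lifelines library and the toy follow-up data are assumptions for illustration, not part of the NARA guidelines.

    ```python
    # Kaplan-Meier estimate of implant survival (hypothetical register data).
    from lifelines import KaplanMeierFitter

    # Follow-up years and revision indicator (1 = revised, 0 = censored)
    durations = [1.2, 3.4, 5.0, 5.0, 2.1, 4.7, 0.8, 5.0]
    revised = [1, 0, 0, 1, 1, 0, 1, 0]

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=revised)
    print(kmf.survival_function_)   # estimated probability of remaining unrevised
    ```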

  15. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose: To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design: Cross-sectional study. Methods: All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures: Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally, we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results: Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions: Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical

  16. An 'electronic' extramural course in epidemiology and medical statistics.

    PubMed

    Ostbye, T

    1989-03-01

    This article describes an extramural university course in epidemiology and medical statistics taught using a computer conferencing system, microcomputers and data communications. Computer conferencing was shown to be a powerful, yet quite easily mastered, vehicle for distance education. It allows health personnel unable to attend regular classes due to geographical or time constraints to take part in an interactive learning environment at low cost. This overcomes part of the intellectual and social isolation associated with traditional correspondence courses. Teaching of epidemiology and medical statistics is well suited to computer conferencing, even if the asynchronicity of the medium makes discussion of the most complex statistical concepts a little cumbersome. Computer conferencing may also prove to be a useful tool for teaching other medical and health-related subjects.

  17. Ganymede - A relationship between thermal history and crater statistics

    NASA Technical Reports Server (NTRS)

    Phillips, R. J.; Malin, M. C.

    1980-01-01

    An approach for factoring the effects of a planetary thermal history into a predicted set of crater statistics for an icy satellite is developed and forms the basis for subsequent data inversion studies. The key parameter is a thermal evolution-dependent critical time before which craters of a particular size do not contribute to present-day statistics. An example is given for the satellite Ganymede, and the effect of the thermal history is easily seen in the resulting predicted crater statistics. A preliminary comparison with the data, subject to the uncertainties in ice rheology and impact flux history, suggests a surface age of 3.8 × 10⁹ years and a radionuclide abundance of 0.3 times the chondritic value.

  18. Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that creates pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of tested hypothesis is known in NCPA, but not for ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182

  19. Statistics Translated: A Step-by-Step Guide to Analyzing and Interpreting Data

    ERIC Educational Resources Information Center

    Terrell, Steven R.

    2012-01-01

    Written in a humorous and encouraging style, this text shows how the most common statistical tools can be used to answer interesting real-world questions, presented as mysteries to be solved. Engaging research examples lead the reader through a series of six steps, from identifying a researchable problem to stating a hypothesis, identifying…

  20. Analysis and Interpretation of Findings Using Multiple Regression Techniques

    ERIC Educational Resources Information Center

    Hoyt, William T.; Leierer, Stephen; Millington, Michael J.

    2006-01-01

    Multiple regression and correlation (MRC) methods form a flexible family of statistical techniques that can address a wide variety of different types of research questions of interest to rehabilitation professionals. In this article, we review basic concepts and terms, with an emphasis on interpretation of findings relevant to research questions…

  1. Degree-based statistic and center persistency for brain connectivity analysis.

    PubMed

    Yoo, Kwangsun; Lee, Peter; Chung, Moo K; Sohn, William S; Chung, Sun Ju; Na, Duk L; Ju, Daheen; Jeong, Yong

    2017-01-01

    Brain connectivity analyses have been widely performed to investigate the organization and functioning of the brain, or to observe changes in neurological or psychiatric conditions. However, connectivity analysis inevitably introduces the problem of mass-univariate hypothesis testing. Although several cluster-wise correction methods have been suggested to address this problem and shown to provide high sensitivity, these approaches fundamentally have two drawbacks: the lack of spatial specificity (localization power) and the arbitrariness of the initial cluster-forming threshold. In this study, we propose a novel method, the degree-based statistic (DBS), performing cluster-wise inference. DBS is designed to overcome the two shortcomings mentioned above. From a network perspective, a few brain regions are of critical importance and considered to play pivotal roles in network integration. Following this notion, DBS defines a cluster as a set of edges that share one ending node. This definition enables the efficient detection of clusters and their center nodes. Furthermore, a new measure of a cluster, center persistency (CP), was introduced. The efficiency of DBS was demonstrated with a known "ground truth" simulation. We then applied DBS to two experimental datasets and showed that it successfully detects the persistent clusters. In conclusion, by adopting the graph-theoretical concept of degree and borrowing the concept of persistence from algebraic topology, DBS can sensitively identify clusters with centric nodes that play pivotal roles in an effect of interest. DBS is potentially applicable to a wide variety of cognitive or clinical questions and allows us to obtain statistically reliable and easily interpretable results. Hum Brain Mapp 38:165-181, 2017. © 2016 Wiley Periodicals, Inc.
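
    The cluster definition is simple enough to sketch: after edge-wise thresholding, a DBS-style cluster is the set of suprathreshold edges sharing one ending (center) node, so cluster size reduces to that node's degree in the thresholded graph. The sketch below uses hypothetical edge-wise p-values and omits the center-persistency step over a range of thresholds.

    ```python
    # DBS-style cluster detection: degree of each node in the graph of
    # suprathreshold edges (hypothetical p-values; persistency omitted).
    import numpy as np

    rng = np.random.default_rng(3)
    n_nodes = 20
    edge_p = rng.uniform(size=(n_nodes, n_nodes))    # stand-in edge-wise p-values
    edge_p = np.triu(edge_p, 1) + np.triu(edge_p, 1).T

    threshold = 0.01
    supra = (edge_p < threshold) & ~np.eye(n_nodes, dtype=bool)
    degree = supra.sum(axis=1)                       # cluster size per center node
    centers = np.argsort(degree)[::-1][:3]
    print("candidate center nodes:", centers, "degrees:", degree[centers])
    ```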

  2. Assay Design Affects the Interpretation of T-Cell Receptor Gamma Gene Rearrangements

    PubMed Central

    Cushman-Vokoun, Allison M.; Connealy, Solomon; Greiner, Timothy C.

    2010-01-01

    Interpretation of capillary electrophoresis results derived from multiplexed fluorochrome-labeled primer sets can be complicated by small peaks, which may be incorrectly interpreted as clonal T-cell receptor-γ gene rearrangements. In this report, different assay designs were used to illustrate how design may adversely affect specificity. Ten clinical cases, with subclonal peaks containing one of the two infrequently used joining genes, were identified with a tri-color, one-tube assay. The DNA was amplified with the same NED fluorochrome on all three joining primers, first combined (one-color assay) and then amplified separately using a single NED-labeled joining primer. The single primer assay design shows how insignificant peaks could easily be wrongly interpreted as clonal T-cell receptor-γ gene rearrangements. Next, the performance of the one-tube assay was compared with the two-tube BIOMED-2-based TCRG Gene Clonality Assay in a series of 44 cases. Whereas sensitivity was similar between the two methods (92.9% vs. 96.4%; P = 0.55), specificity was significantly less in the BIOMED-2 assay (87.5% vs. 56.3%; P = 0.049) when a 2× ratio was used to define clonality. Specificity was improved to 81.3% by the use of a 5× peak height ratio (P = 0.626). These findings illustrate how extra caution is needed in interpreting a design with multiple, separate distributions, which is more difficult to interpret than a single distribution assay. PMID:20959612

  3. Biophotons, coherence and photocount statistics: A critical review

    NASA Astrophysics Data System (ADS)

    Cifra, Michal; Brouder, Christian; Nerudová, Michaela; Kučera, Ondřej

    2015-08-01

    Biological samples continuously emit ultra-weak photon emission (UPE, or "biophotons"), which stems from electronic excited states generated chemically during oxidative metabolism and stress. UPE can therefore potentially serve as a method for non-invasive diagnostics of oxidative processes or, if discovered, of other processes capable of electron excitation. While the fundamental generating mechanisms of UPE are fairly well elucidated, together with their approximate ranges of intensities and spectra, the statistical properties of UPE remain a highly challenging topic. Here we review claims about nontrivial statistical properties of UPE, such as coherence and squeezed states of light. After an introduction to the necessary theory, we categorize the experimental works of all authors into those with solid, conventional interpretation and those with unconventional and even speculative interpretation. The conclusion of our review is twofold: while the phenomenon of UPE from biological systems can be considered experimentally well established, no reliable evidence for the coherence or nonclassicality of UPE has actually been achieved up to now. Furthermore, we propose perspective avenues in the research of the statistical properties of biological UPE.

  4. Surveys Assessing Students' Attitudes toward Statistics: A Systematic Review of Validity and Reliability

    ERIC Educational Resources Information Center

    Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.

    2012-01-01

    Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…

  5. Philosophical perspectives on quantum chaos: Models and interpretations

    NASA Astrophysics Data System (ADS)

    Bokulich, Alisa Nicole

    2001-09-01

    The problem of quantum chaos is a special case of the larger problem of understanding how the classical world emerges from quantum mechanics. While we have learned that chaos is pervasive in classical systems, it appears to be almost entirely absent in quantum systems. The aim of this dissertation is to determine what implications the interpretation of quantum mechanics has for attempts to explain the emergence of classical chaos. There are three interpretations of quantum mechanics that have set out programs for solving the problem of quantum chaos: the standard interpretation, the statistical interpretation, and the deBroglie-Bohm causal interpretation. One of the main conclusions of this dissertation is that an interpretation alone is insufficient for solving the problem of quantum chaos and that the phenomenon of decoherence must be taken into account. Although a completely satisfactory solution of the problem of quantum chaos is still outstanding, I argue that the deBroglie-Bohm interpretation with the help of decoherence outlines the most promising research program to pursue. In addition to making a contribution to the debate in the philosophy of physics concerning the interpretation of quantum mechanics, this dissertation reveals two important methodological lessons for the philosophy of science. First, issues of reductionism and intertheoretic relations cannot be divorced from questions concerning the interpretation of the theories involved. Not only is the exploration of intertheoretic relations a central part of the articulation and interpretation of an individual theory, but the very terms used to discuss intertheoretic relations, such as `state' and `classical limit', are themselves defined by particular interpretations of the theory. The second lesson that emerges is that, when it comes to characterizing the relationship between classical chaos and quantum mechanics, the traditional approaches to intertheoretic relations, namely reductionism and

  6. Interpreters, Interpreting, and the Study of Bilingualism.

    ERIC Educational Resources Information Center

    Valdes, Guadalupe; Angelelli, Claudia

    2003-01-01

    Discusses research on interpreting focused specifically on issues raised by this literature about the nature of bilingualism. Suggests research carried out on interpreting--while primarily produced with a professional audience in mind and concerned with improving the practice of interpreting--provides valuable insights about complex aspects of…

  7. Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.

    PubMed

    Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan

    2018-05-01

    The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze it, and use the reference to recommend informed regulatory intervention, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection. These 4 parameters are IC 025 , PRR and PRR lb , chi-square, and N 11 ; we calculated these 4 values using R programming. We analyzed 78,983 drug-ADR combinations, and the total count of drug-ADR combination was 4,20,060. During the calculation of the statistical parameter, we use 3 variables: (1) N 11 (number of counts), (2) N 1. (Drug margin), and (3) N .1 (ADR margin). The structure and calculation of these 4 statistical parameters in R language are easily understandable. On the basis of the IC value (IC value >0), out of the 78,983 drug-ADR combination (drug-ADR combination), we found the 8,667 combinations to be significantly associated. The calculation of statistical parameters in R language is time saving and allows to easily identify new signals in the Indian ICSR (Individual Case Safety Reports) database.

  8. Use of check lists in assessing the statistical content of medical studies.

    PubMed Central

    Gardner, M J; Machin, D; Campbell, M J

    1986-01-01

    Two check lists are used routinely in the statistical assessment of manuscripts submitted to the "BMJ." One is for papers of a general nature and the other specifically for reports on clinical trials. Each check list includes questions on the design, conduct, analysis, and presentation of studies, and answers to these contribute to the overall statistical evaluation. Only a small proportion of submitted papers are assessed statistically, and these are selected at the refereeing or editorial stage. Examination of the use of the check lists showed that most papers contained statistical failings, many of which could easily be remedied. It is recommended that the check lists should be used by statistical referees, editorial staff, and authors and also during the design stage of studies. PMID:3082452

  9. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    USGS Publications Warehouse

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
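
    As one concrete instance of the interval constructions described, the sketch below bootstraps a confidence interval for a percentile of hypothetical field-blank results; the bootstrap is an illustrative choice, not necessarily the method used in the report.

    ```python
    # Bootstrap 95% CI on the 90th percentile of field-blank results
    # (hypothetical data and percentile choice).
    import numpy as np

    rng = np.random.default_rng(4)
    blanks = rng.lognormal(mean=-3.0, sigma=0.8, size=40)   # stand-in concentrations

    boot = np.array([
        np.percentile(rng.choice(blanks, size=blanks.size, replace=True), 90)
        for _ in range(10_000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"90th percentile = {np.percentile(blanks, 90):.4f}, "
          f"95% CI = ({lo:.4f}, {hi:.4f})")
    ```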

  10. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    Purpose: We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. Methods: We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions: We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable, so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  11. On the interpretations of Langevin stochastic equation in different coordinate systems

    NASA Astrophysics Data System (ADS)

    Martínez, E.; López-Díaz, L.; Torres, L.; Alejos, O.

    2004-01-01

    The stochastic Langevin Landau-Lifshitz equation is usually utilized in micromagnetics formalism to account for thermal effects. Commonly, two different interpretations of the stochastic integrals can be made: Ito and Stratonovich. In this work, the Langevin-Landau-Lifshitz (LLL) equation is written in both Cartesian and Spherical coordinates. If Spherical coordinates are employed, the noise is additive, and therefore, Ito and Stratonovich solutions are equal. This is not the case when (LLL) equation is written in Cartesian coordinates. In this case, the Langevin equation must be interpreted in the Stratonovich sense in order to reproduce correct statistical results. Nevertheless, the statistics of the numerical results obtained from Euler-Ito and Euler-Stratonovich schemes are equivalent due to the additional numerical constraint imposed in Cartesian system after each time step, which itself assures that the magnitude of the magnetization is preserved.
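
    The Ito/Stratonovich distinction is easiest to see in a scalar toy SDE with multiplicative noise rather than the full LLL equation; in the sketch below, Euler-Maruyama evaluates the noise coefficient at the start of each step (Ito) while a Heun-type predictor-corrector evaluates it in midpoint fashion (Stratonovich). The magnetization-renormalization constraint mentioned in the abstract is not modeled.

    ```python
    # Ito vs. Stratonovich for dX = -X dt + sigma*X dW (toy SDE, not LLL).
    import numpy as np

    def ensemble_mean(scheme, sigma=0.5, n_paths=20_000, n_steps=1_000,
                      dt=1e-3, seed=5):
        """Monte Carlo estimate of E[X(1)] under the chosen scheme."""
        rng = np.random.default_rng(seed)
        x = np.ones(n_paths)
        for _ in range(n_steps):
            dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
            if scheme == "ito":                      # Euler-Maruyama
                x = x - x * dt + sigma * x * dw
            else:                                    # Euler-Heun (Stratonovich)
                x_pred = x - x * dt + sigma * x * dw
                x = x - x * dt + 0.5 * sigma * (x + x_pred) * dw
        return x.mean()

    # With multiplicative noise the interpretations disagree:
    # E[X(1)] = exp(-1) ~ 0.368 under Ito, exp(-1 + sigma^2/2) ~ 0.417
    # under Stratonovich.
    print(ensemble_mean("ito"), ensemble_mean("strat"))
    ```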

  12. Understanding regulatory networks requires more than computing a multitude of graph statistics. Comment on "Drivers of structural features in gene regulatory networks: From biophysical constraints to biological function" by O.C. Martin et al.

    NASA Astrophysics Data System (ADS)

    Tkačik, Gašper

    2016-07-01

    The article by O. Martin and colleagues provides a much needed systematic review of a body of work that relates the topological structure of genetic regulatory networks to evolutionary selection for function. This connection is very important. Using the current wealth of genomic data, statistical features of regulatory networks (e.g., degree distributions, motif composition, etc.) can be quantified rather easily; it is, however, often unclear how to interpret the results. On a graph theoretic level the statistical significance of the results can be evaluated by comparing observed graphs to "randomized" ones (bravely ignoring the issue of how precisely to randomize!) and comparing the frequency of appearance of a particular network structure relative to a randomized null expectation. While this is a convenient operational test for statistical significance, its biological meaning is questionable. In contrast, an in-silico genotype-to-phenotype model makes explicit the assumptions about the network function, and thus clearly defines the expected network structures that can be compared to the case of no selection for function and, ultimately, to data.
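
    The conventional null-model test described above fits in a few lines; the choice of randomization, handled here by degree-preserving edge swaps, is exactly the caveat the comment raises. The graph is a random stand-in, not real regulatory data.

    ```python
    import networkx as nx
    import numpy as np

    G = nx.erdos_renyi_graph(100, 0.06, seed=1)        # stand-in for a real network
    observed = sum(nx.triangles(G).values()) // 3      # motif: triangle count

    null_counts = []
    for i in range(50):
        R = G.copy()
        nx.double_edge_swap(R, nswap=5 * R.number_of_edges(), max_tries=10**5, seed=i)
        null_counts.append(sum(nx.triangles(R).values()) // 3)

    null_counts = np.array(null_counts)
    z = (observed - null_counts.mean()) / null_counts.std(ddof=1)
    print(f"observed {observed}, null mean {null_counts.mean():.1f}, z = {z:.2f}")
    ```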

  13. Pattern statistics on Markov chains and sensitivity to parameter estimation.

    PubMed

    Nuel, Grégory

    2006-10-17

    In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
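
    A toy version of the binomial approximation, assuming a first-order Markov model and an invented sequence: perturbing the estimated transition counts (a pseudo-count standing in for estimation error) visibly shifts the expected count and standard deviation of a word.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    alphabet = "ACGT"
    seq = "".join(rng.choice(list(alphabet), size=10000))
    word = "ACGA"

    def word_stats(seq, word, pseudo=0.0):
        # Estimate the transition matrix, optionally perturbed by a pseudo-count
        idx = {c: i for i, c in enumerate(alphabet)}
        counts = np.full((4, 4), pseudo)
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
        P = counts / counts.sum(axis=1, keepdims=True)
        mu = np.array([seq.count(c) for c in alphabet]) / len(seq)
        p = mu[idx[word[0]]]
        for a, b in zip(word, word[1:]):
            p *= P[idx[a], idx[b]]
        n = len(seq) - len(word) + 1
        return n * p, np.sqrt(n * p * (1 - p))    # binomial mean and sd

    for pseudo in (0.0, 50.0, 200.0):
        mean, sd = word_stats(seq, word, pseudo)
        print(f"pseudo-count {pseudo:5.0f}: expected {mean:6.1f} +/- {sd:4.1f}")
    ```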

  14. Improving the effectiveness of ecological site descriptions: General state-and-transition models and the Ecosystem Dynamics Interpretive Tool (EDIT)

    USGS Publications Warehouse

    Bestelmeyer, Brandon T.; Williamson, Jeb C.; Talbot, Curtis J.; Cates, Greg W.; Duniway, Michael C.; Brown, Joel R.

    2016-01-01

    State-and-transition models (STMs) are useful tools for management, but they can be difficult to use and have limited content. STMs created for groups of related ecological sites could simplify and improve their utility. The amount of information linked to models can be increased using tables that communicate management interpretations and important within-group variability. We created a new web-based information system (the Ecosystem Dynamics Interpretive Tool) to house STMs, associated tabular information, and other ecological site data and descriptors. Fewer, more informative, better organized, and easily accessible STMs should increase the accessibility of science information.

  15. Toward Establishing the Validity of the Resource Interpreter's Self-Efficacy Instrument

    NASA Astrophysics Data System (ADS)

    Smith, Grant D.

    Interpretive rangers serve as one of the major educational resources that visitors may encounter during their visit to a park or other natural area, yet our understanding of their professional growth remains limited. This study helps address this issue by developing an instrument that evaluates the beliefs of resource interpreters regarding their capabilities of communicating with the public. The resulting 11-item instrument was built around the construct of Albert Bandura's self-efficacy theory (Bandura, 1977, 1986, 1997), used guidelines and principles developed over the course of 30 years of teacher efficacy studies (Bandura, 2006; Gibson & Dembo, 1984; Riggs & Enochs, 1990; Tschannen-Moran & Hoy, 2001; Tschannen-Moran, Hoy, & Hoy, 1998), and probed areas of challenge that are unique to the demands of resource interpretation (Brochu & Merriman, 2002; Ham, 1992; Knudson, Cable, & Beck, 2003; Larsen, 2003; Tilden, 1977). A voluntary convenience sample of 364 National Park Service rangers was collected in order to conduct the statistical analyses needed to winnow the draft instrument down from 47 items in its original form to 11 items in its final state. Statistical analyses used in this process included item-total correlation, index of discrimination, exploratory factor analysis, and confirmatory factor analysis.

  16. Easily constructed spectroelectrochemical cell for batch and flow injection analyses.

    PubMed

    Flowers, Paul A; Maynor, Margaret A; Owens, Donald E

    2002-02-01

    The design and performance of an easily constructed spectroelectrochemical cell suitable for batch and flow injection measurements are described. The cell is fabricated from a commercially available 5-mm quartz cuvette and employs 60 ppi reticulated vitreous carbon as the working electrode, resulting in a reasonable compromise between optical sensitivity and thin-layer electrochemical behavior. The spectroelectrochemical traits of the cell in both batch and flow modes were evaluated using aqueous ferricyanide and compare favorably to those reported previously for similar cells.

  17. Crunching Numbers: What Cancer Screening Statistics Really Tell Us

    Cancer.gov

    Cancer screening studies have shown that more screening does not necessarily translate into fewer cancer deaths. This article explains how to interpret the statistics used to describe the results of screening studies.

  18. Toward smartphone applications for geoparks information and interpretation systems in China

    NASA Astrophysics Data System (ADS)

    Li, Qian; Tian, Mingzhong; Li, Xingle; Shi, Yihua; Zhou, Xu

    2015-11-01

    Geopark information and interpretation systems are necessary infrastructure in geopark planning and construction programs, and they are also essential for geoeducation and geoconservation in geopark tourism. This paper presents and analyzes the current state and development of information and interpretation systems in China's geoparks. Statistics showed that fewer than half of the geoparks ran websites, fewer still maintained databases, and less than one percent of all Internet/smartphone applications were used for geopark tourism. The results of our analysis indicate that smartphone applications in geopark information and interpretation systems would provide benefits such as accelerating geopark science popularization and education and facilitating interactive communication between geoparks and tourists.

  19. Vocational students' learning preferences: the interpretability of ipsative data.

    PubMed

    Smith, P J

    2000-02-01

    A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected where the number of variables and the sample size are sufficiently large. The research reported here is a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students, and they lend support to the argument that factor analysis of ipsative data can provide sensibly interpretable results.

  20. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
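
    The authors' tools are written for the R/S environment; the sketch below is a simplified Python rendering of the ROS idea for a single detection limit (real ROS uses Helsel-Cohn plotting positions to handle multiple limits). Data are invented.

    ```python
    # Regress log concentrations of detected values on normal scores, then
    # impute the censored observations from the fitted line.
    import numpy as np
    from scipy import stats

    vals = np.array([0.5, 0.5, 0.5, 0.8, 1.1, 1.6, 2.3, 3.1, 4.9, 7.4])  # 0.5 = DL
    censored = np.array([True, True, True] + [False] * 7)

    n = vals.size
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)    # Blom plotting positions
    q = stats.norm.ppf(pp)                              # normal scores

    slope, intercept, *_ = stats.linregress(q[~censored], np.log(vals[~censored]))
    imputed = np.exp(intercept + slope * q[censored])   # modeled "less-thans"

    full = np.concatenate([imputed, vals[~censored]])
    print(f"ROS mean {full.mean():.2f}, sd {full.std(ddof=1):.2f}")
    ```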

  1. Comparison of interpretation methods of thermocouple psychrometer readouts

    NASA Astrophysics Data System (ADS)

    Guz, Łukasz; Majerek, Dariusz; Sobczuk, Henryk; Guz, Ewa; Połednik, Bernard

    2017-07-01

    Thermocouple psychrometers allow determination of the water potential, which can easily be recalculated into the relative humidity of air in the cavities of porous materials. The typical measuring range of a probe is very narrow: the lower limit of water potential measurement is about -200 kPa, while the upper limit is approximately -7000 kPa and depends on many factors. This paper presents a comparison of two interpretation methods for the thermocouple microvolt output, based on: i) the amplitude of the voltage during wet-bulb temperature depression, and ii) the area under the microvolt output curve. Previous experimental results indicate a robust correlation between water potential and the area under the microvolt output curve. To obtain correct water potential results, each probe should be calibrated. NaCl solutions with molalities from 0.75 M to 2.25 M were used for calibration, giving osmotic potentials from -3377 kPa to -10865 kPa. During measurements, a 5 mA heating current was applied for 5 s and a 5 mA cooling current for 30 s. The study shows that the interpretation method based on the area under the microvolt output curve widens the usable water potential range by about 1000 kPa. The average root mean square error (RMSE) of this method during calibration without temperature stabilization was 1199 kPa, compared with 1378 kPa for the amplitude-based method.
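
    A sketch of the two readouts being compared, with simulated microvolt curves and an invented response model (the actual probe response is not reproduced here): compute the voltage amplitude and the area under the curve, then fit a linear calibration against known osmotic potentials.

    ```python
    import numpy as np

    t = np.linspace(0.0, 30.0, 301)                    # s, readout window

    def readout(signal):
        """Return (peak amplitude, area under curve) for one microvolt trace."""
        return signal.max(), np.trapz(signal, t)

    potentials = np.array([-3377.0, -5500.0, -8200.0, -10865.0])   # kPa, calibration
    amps, areas = [], []
    for psi in potentials:
        peak = 60.0 * (-psi) / 10865.0                 # invented response model, uV
        signal = peak * (1 - np.exp(-t / 1.5)) * np.exp(-t / 8.0)
        a, s = readout(signal)
        amps.append(a)
        areas.append(s)

    # Linear calibrations: psi = c1 * reading + c0
    print("area fit:     ", np.polyfit(areas, potentials, 1))
    print("amplitude fit:", np.polyfit(amps, potentials, 1))
    ```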

  2. Interpreting Meta-Analyses of Genome-Wide Association Studies

    PubMed Central

    Han, Buhm; Eskin, Eleazar

    2012-01-01

    Meta-analysis is an increasingly popular tool for combining multiple genome-wide association studies in a single analysis to identify associations with small effect sizes. The effect sizes between studies in a meta-analysis may differ and these differences, or heterogeneity, can be caused by many factors. If heterogeneity is observed in the results of a meta-analysis, interpreting the cause of heterogeneity is important because the correct interpretation can lead to a better understanding of the disease and a more effective design of a replication study. However, interpreting heterogeneous results is difficult. The standard approach of examining the association p-values of the studies does not effectively predict if the effect exists in each study. In this paper, we propose a framework facilitating the interpretation of the results of a meta-analysis. Our framework is based on a new statistic representing the posterior probability that the effect exists in each study, which is estimated utilizing cross-study information. Simulations and application to the real data show that our framework can effectively segregate the studies predicted to have an effect, the studies predicted to not have an effect, and the ambiguous studies that are underpowered. In addition to helping interpretation, the new framework also allows us to develop a new association testing procedure taking into account the existence of effect. PMID:22396665
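
    As background to the paper's per-study posterior statistic, the standard heterogeneity quantities it builds on can be computed directly; the effect sizes and standard errors below are invented.

    ```python
    import numpy as np
    from scipy import stats

    beta = np.array([0.12, 0.25, 0.05, 0.30, 0.18])    # per-study log odds ratios
    se = np.array([0.06, 0.08, 0.07, 0.10, 0.05])

    w = 1.0 / se**2
    beta_fe = np.sum(w * beta) / np.sum(w)             # fixed-effects pooled estimate
    Q = np.sum(w * (beta - beta_fe) ** 2)              # Cochran's Q
    k = beta.size
    I2 = max(0.0, (Q - (k - 1)) / Q)                   # I^2 heterogeneity fraction
    p_het = stats.chi2.sf(Q, k - 1)
    print(f"pooled {beta_fe:.3f}, Q = {Q:.2f} (p = {p_het:.3f}), I2 = {100*I2:.0f}%")
    ```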

  3. Does Training in Table Creation Enhance Table Interpretation? A Quasi-Experimental Study with Follow-Up

    ERIC Educational Resources Information Center

    Karazsia, Bryan T.; Wong, Kendal

    2016-01-01

    Quantitative and statistical literacy are core domains in the undergraduate psychology curriculum. An important component of such literacy includes interpretation of visual aids, such as tables containing results from statistical analyses. This article presents results of a quasi-experimental study with longitudinal follow-up that tested the…

  4. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial
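
    A sketch of the a priori power analysis the review found lacking, for a two-sample t-test via statsmodels; the effect sizes and sample sizes are illustrative.

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Sample size per group to detect a medium effect (d = 0.5) at 80% power
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    # Achieved power for small/medium/large effects with 50 per group
    for d in (0.2, 0.5, 0.8):
        print(d, analysis.power(effect_size=d, nobs1=50, alpha=0.05))
    print(f"needed per group for d=0.5: {n_per_group:.0f}")
    ```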

  5. An easily regenerable enzyme reactor prepared from polymerized high internal phase emulsions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruan, Guihua, E-mail: guihuaruan@hotmail.com; Guangxi Collaborative Innovation Center for Water Pollution Control and Water Safety in Karst Area, Guilin University of Technology, Guilin 541004; Wu, Zhenwei

    A large-scale, high-efficiency enzyme reactor based on a polymerized high internal phase emulsion monolith (polyHIPE) was prepared. First, a porous cross-linked polyHIPE monolith was prepared by in-situ thermal polymerization of a high internal phase emulsion containing styrene, divinylbenzene and polyglutaraldehyde. The enzyme TPCK-Trypsin was then immobilized on the monolithic polyHIPE. The performance of the resulting enzyme reactor was assessed by its ability to convert Nα-benzoyl-L-arginine ethyl ester to Nα-benzoyl-L-arginine, and by the protein digestibility of bovine serum albumin (BSA) and cytochrome c (Cyt-C). The results showed that the prepared enzyme reactor exhibited high enzyme immobilization efficiency and fast, easily controlled protein digestion. BSA and Cyt-C could be digested in 10 min with sequence coverages of 59% and 78%, respectively. The peptides and residual protein could be easily rinsed out of the reactor, and the reactor could be regenerated with 4 M HCl without any structural damage. Multiple interconnected chambers with good permeability, fast digestion, and easy regeneration indicate that the polyHIPE enzyme reactor is potentially well suited to proteomics and catalysis applications. - Graphical abstract: Schematic illustration of the preparation of a hypercrosslinked polyHIPE immobilized-enzyme reactor for on-column protein digestion. - Highlights: • A reactor was prepared and used for enzyme immobilization and continuous on-column protein digestion. • The new polyHIPE IMER was well suited to protein digestion. • On-column digestion showed that the IMER is easily regenerated by HCl without structural damage.

  6. Boyle temperature as a point of ideal gas in gentile statistics and its economic interpretation

    NASA Astrophysics Data System (ADS)

    Maslov, V. P.; Maslova, T. V.

    2014-07-01

    Boyle temperature is interpreted as the temperature at which the formation of dimers becomes impossible. To Irving Fisher's correspondence principle we assign two more quantities: the number of degrees of freedom, and credit. We determine the danger level of the mass of money M when the mutual trust between economic agents begins to fall.

  7. Critical Views of 8th Grade Students toward Statistical Data in Newspaper Articles: Analysis in Light of Statistical Literacy

    ERIC Educational Resources Information Center

    Guler, Mustafa; Gursoy, Kadir; Guven, Bulent

    2016-01-01

    Understanding and interpreting biased data, decision-making in accordance with the data, and critically evaluating situations involving data are among the fundamental skills necessary in the modern world. To develop these required skills, emphasis on statistical literacy in school mathematics has been gradually increased in recent years. The…

  8. Looking at tardigrades in a new light: using epifluorescence to interpret structure.

    PubMed

    Perry, E S; Miller, W R; Lindsay, S

    2015-02-01

    The use of epifluorescence microscopy coupled with ultraviolet (UV) autofluorescence is suggested as a means to view and interpret tardigrade structures. Endogenous fluorochromes are a known component of tardigrade cuticle, claws and bucco-pharyngeal apparatus. By imaging the autofluorescence from tardigrades, it is possible to document these structures in detail, including the subdivisions and boundaries of echiniscid (heterotardigrade) plates and the nature and spatial relationships of the texture (pores, granules, papillae and tubercles) on the various plates. This allows the determination of taxonomic features not easily seen with other microscopic techniques. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  9. Pattern statistics on Markov chains and sensitivity to parameter estimation

    PubMed Central

    Nuel, Grégory

    2006-01-01

    Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916

  10. Plasmonic Films Can Easily Be Better: Rules and Recipes

    PubMed Central

    2015-01-01

    High-quality materials are critical for advances in plasmonics, especially as researchers now investigate quantum effects at the limit of single surface plasmons or exploit ultraviolet- or CMOS-compatible metals such as aluminum or copper. Unfortunately, due to inexperience with deposition methods, many plasmonics researchers deposit metals under the wrong conditions, severely limiting performance unnecessarily. This is then compounded as others follow their published procedures. In this perspective, we describe simple rules collected from the surface-science literature that allow high-quality plasmonic films of aluminum, copper, gold, and silver to be easily deposited with commonly available equipment (a thermal evaporator). Recipes are also provided so that films with optimal optical properties can be routinely obtained. PMID:25950012

  11. Using recurrence plot analysis for software execution interpretation and fault detection

    NASA Astrophysics Data System (ADS)

    Mosdorf, M.

    2015-09-01

    This paper shows a method for software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are then processed with principal component analysis (PCA), which reduces the number of coefficients used for software execution classification. The method was used to analyze five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. Results show that some of the collected traces could easily be assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms), while others are more difficult to distinguish.
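
    A minimal sketch of the pipeline, assuming synthetic integer streams in place of real instruction traces: build a recurrence matrix, summarize it with two coefficients, and project with PCA.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def recurrence_features(trace, eps=0.0):
        x = np.asarray(trace, dtype=float)
        R = np.abs(x[:, None] - x[None, :]) <= eps      # recurrence matrix
        rr = R.mean()                                    # recurrence rate
        diag = max(R.diagonal(k).mean() for k in range(1, 30))  # strongest short-lag diagonal
        return [rr, diag]

    rng = np.random.default_rng(3)
    # Unstructured traces vs. strongly periodic ones (e.g., a filter loop)
    traces = [rng.integers(0, 8, 500) for _ in range(5)]
    traces += [np.tile(rng.integers(0, 8, 25), 20) for _ in range(5)]

    feats = np.array([recurrence_features(t) for t in traces])
    coords = PCA(n_components=2).fit_transform(feats)
    print(coords.round(2))                               # the two groups separate
    ```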

  12. Statistical Performances of Resistive Active Power Splitter

    NASA Astrophysics Data System (ADS)

    Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul

    2016-03-01

    In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) are proposed. The device is based on an active cell composed of a field-effect transistor in cascade with shunted resistors at the input and the output (a resistive amplifier topology). The PWS uncertainty with respect to resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, the device gain can be easily controlled by varying a resistance. This provides a useful tool for analysing the statistical sensitivity of the system in an uncertain environment.
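
    A Monte Carlo sketch of this kind of tolerance analysis. The simple resistive-divider gain below is a stand-in for the paper's FET-based cell, and the nominal values and tolerances are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    R1_nom, R2_nom, tol = 50.0, 100.0, 0.05            # ohms, +/-5% tolerance
    N = 100_000
    R1 = R1_nom * (1 + tol * rng.uniform(-1, 1, N))
    R2 = R2_nom * (1 + tol * rng.uniform(-1, 1, N))

    gain_db = 20 * np.log10(R2 / (R1 + R2))            # divider "gain"
    print(f"gain {gain_db.mean():.2f} dB, sd {gain_db.std():.3f} dB, "
          f"95% range [{np.percentile(gain_db, 2.5):.2f}, "
          f"{np.percentile(gain_db, 97.5):.2f}]")
    ```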

  13. Statistical considerations in the development of injury risk functions.

    PubMed

    McMurry, Timothy L; Poplin, Gerald S

    2015-01-01

    We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
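
    One of the points above, constructing pointwise confidence intervals on the correct scale, can be sketched directly: compute the interval on the logit scale and transform, rather than working on the probability scale. Data are simulated, not crash-test results.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    severity = rng.uniform(0, 100, 200)                # hypothetical stimulus measure
    p_true = 1 / (1 + np.exp(-(severity - 50) / 10))
    injured = (rng.random(200) < p_true).astype(float)

    X = sm.add_constant(severity)
    fit = sm.Logit(injured, X).fit(disp=0)

    grid = sm.add_constant(np.linspace(0, 100, 5))
    eta = grid @ fit.params                            # linear predictor (logit scale)
    se = np.sqrt(np.einsum("ij,jk,ik->i", grid, fit.cov_params(), grid))
    expit = lambda z: 1 / (1 + np.exp(-z))
    for x, lo, hi in zip(grid[:, 1], expit(eta - 1.96 * se), expit(eta + 1.96 * se)):
        print(f"risk at {x:5.1f}: 95% CI [{lo:.3f}, {hi:.3f}]")
    ```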

  14. Statistical analogues of thermodynamic extremum principles

    NASA Astrophysics Data System (ADS)

    Ramshaw, John D.

    2018-05-01

    As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = -k_B ∑_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E - TS and the grand potential J = F - μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
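
    A one-line check of the equivalence in the notation above: minimizing the statistical analogue of F over normalized p_i recovers the canonical distribution with no constraint on the mean energy.

    ```latex
    % Minimizing F[p] subject only to normalization:
    \begin{aligned}
    F[p] &= \sum_i p_i E_i + k_B T \sum_i p_i \log p_i,\\
    0 &= \frac{\partial}{\partial p_i}\Big(F[p] + \lambda \sum_j p_j\Big)
       = E_i + k_B T\,(\log p_i + 1) + \lambda\\
    \Rightarrow\quad p_i &= \frac{e^{-E_i/k_B T}}{Z},\qquad
    Z = \sum_i e^{-E_i/k_B T}.
    \end{aligned}
    ```

    The energy constraint and its Lagrange multiplier never enter; only the normalization multiplier λ remains.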

  15. Interpreting Abstract Interpretations in Membership Equational Logic

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Rosu, Grigore

    2001-01-01

    We present a logical framework in which abstract interpretations can be naturally specified and then verified. Our approach is based on membership equational logic which extends equational logics by membership axioms, asserting that a term has a certain sort. We represent an abstract interpretation as a membership equational logic specification, usually as an overloaded order-sorted signature with membership axioms. It turns out that, for any term, its least sort over this specification corresponds to its most concrete abstract value. Maude implements membership equational logic and provides mechanisms to calculate the least sort of a term efficiently. We first show how Maude can be used to get prototyping of abstract interpretations "for free." Building on the meta-logic facilities of Maude, we further develop a tool that automatically checks an abstract interpretation against a set of user-defined properties. This can be used to select an appropriate abstract interpretation, to characterize the specified loss of information during abstraction, and to compare different abstractions with each other.
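
    Outside Maude, the same idea can be sketched in a classical abstract-interpretation style; the toy sign domain below is illustrative and unrelated to the authors' membership-equational encoding.

    ```python
    # A toy sign domain for integer arithmetic: each abstract value plays the
    # role of a sort, and evaluating a term abstractly yields its most
    # precise ("least") abstract value.
    NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

    def alpha(n):                      # abstraction of a concrete integer
        return ZERO if n == 0 else (POS if n > 0 else NEG)

    ADD = {(POS, POS): POS, (NEG, NEG): NEG, (ZERO, ZERO): ZERO,
           (POS, ZERO): POS, (ZERO, POS): POS,
           (NEG, ZERO): NEG, (ZERO, NEG): NEG}   # other cases lose precision

    def abs_add(a, b):
        return ADD.get((a, b), TOP)

    def abs_mul(a, b):
        if ZERO in (a, b):
            return ZERO
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG

    # (x*x) + 1 is positive for any integer x, derivable purely abstractly:
    for x in (NEG, ZERO, POS):
        print(x, abs_add(abs_mul(x, x), alpha(1)))
    ```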

  16. Introduction of a Journal Excerpt Activity Improves Undergraduate Students' Performance in Statistics

    ERIC Educational Resources Information Center

    Rabin, Laura A.; Nutter-Upham, Katherine E.

    2010-01-01

    We describe an active learning exercise intended to improve undergraduate students' understanding of statistics by grounding complex concepts within a meaningful, applied context. Students in a journal excerpt activity class read brief excerpts of statistical reporting from published research articles, answered factual and interpretive questions,…

  17. Tools to Support Interpreting Multiple Regression in the Face of Multicollinearity

    PubMed Central

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses. PMID:22457655

  18. Tools to support interpreting multiple regression in the face of multicollinearity.

    PubMed

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
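
    Two of the listed indices are easy to compute side by side; the simulated predictors below are deliberately collinear so that the beta weights and structure coefficients tell different stories.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)           # collinear with x1
    y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

    Z = np.column_stack([x1, x2])
    Zs = (Z - Z.mean(0)) / Z.std(0)
    ys = (y - y.mean()) / y.std()

    beta, *_ = np.linalg.lstsq(Zs, ys, rcond=None)     # standardized beta weights
    yhat = Zs @ beta
    # Structure coefficients: correlations of predictors with predicted scores
    structure = [np.corrcoef(Z[:, j], yhat)[0, 1] for j in range(2)]
    print("betas:", beta.round(3), "structure coeffs:", np.round(structure, 3))
    ```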

  19. QUANTIFICATION AND INTERPRETATION OF TOTAL PETROLEUM HYDROCARBONS IN SEDIMENT SAMPLES BY A GC/MS METHOD AND COMPARISON WITH EPA 418.1 AND A RAPID FIELD METHOD

    EPA Science Inventory

    ABSTRACT: Total Petroleum hydrocarbons (TPH) as a lumped parameter can be easily and rapidly measured or monitored. Despite interpretational problems, it has become an accepted regulatory benchmark used widely to evaluate the extent of petroleum product contamination. Three cu...

  20. Primer of statistics in dental research: part I.

    PubMed

    Shintani, Ayumi

    2014-01-01

    Statistics plays an essential role in evidence-based dentistry (EBD) practice and research, ranging widely from formulating scientific questions, designing studies, and collecting and analyzing data to interpreting, reporting, and presenting study findings. Mastering statistical concepts appears to be an unreachable goal for many dental researchers, in part because of statistical authorities' limited success in explaining statistical principles to health researchers without elaborate mathematical treatment. This series of 2 articles aims to introduce dental researchers to 9 essential topics in statistics for conducting EBD, with intuitive examples. Part I of the series covers the first 5 topics: (1) statistical graphs, (2) how to deal with outliers, (3) p-values and confidence intervals, (4) testing equivalence, and (5) multiplicity adjustment. Part II will cover the remaining topics: (6) selecting the proper statistical tests, (7) repeated-measures analysis, (8) epidemiological considerations for causal association, and (9) analysis of agreement. Copyright © 2014. Published by Elsevier Ltd.

  1. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.
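
    For orientation, a generic sketch of the regularization baseline the comparison is framed against: recover a source vector x from y = Cx + noise by Tikhonov-regularized least squares. The matrix C, the noise level, and alpha are all invented; the renormalization technique itself replaces these ad hoc choices with data-derived weight matrices.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 12, 40                                      # few sensors, many sources
    C = rng.random((m, n))                             # source-receptor matrix
    x_true = np.zeros(n)
    x_true[17] = 5.0                                   # one point release
    y = C @ x_true + 0.01 * rng.normal(size=m)

    alpha = 1e-2                                       # regularization strength
    x_hat = np.linalg.solve(C.T @ C + alpha * np.eye(n), C.T @ y)
    print("estimated peak at", x_hat.argmax(), "with value", round(x_hat.max(), 2))
    ```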

  2. mvMapper: statistical and geographical data exploration and visualization of multivariate analysis of population structure

    USDA-ARS?s Scientific Manuscript database

    Characterizing population genetic structure across geographic space is a fundamental challenge in population genetics. Multivariate statistical analyses are powerful tools for summarizing genetic variability, but geographic information and accompanying metadata is not always easily integrated into t...

  3. Common Scientific and Statistical Errors in Obesity Research

    PubMed Central

    George, Brandon J.; Beasley, T. Mark; Brown, Andrew W.; Dawson, John; Dimova, Rositsa; Divers, Jasmin; Goldsby, TaShauna U.; Heo, Moonseong; Kaiser, Kathryn A.; Keith, Scott; Kim, Mimi Y.; Li, Peng; Mehta, Tapan; Oakes, J. Michael; Skinner, Asheley; Stuart, Elizabeth; Allison, David B.

    2015-01-01

    We identify 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discuss how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and “p-value hacking,” 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. We hope that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician. PMID:27028280

  4. Statistical properties and correlation functions for drift waves

    NASA Technical Reports Server (NTRS)

    Horton, W.

    1986-01-01

    The dissipative one-field drift wave equation is solved using the pseudospectral method to generate steady-state fluctuations. The fluctuations are analyzed in terms of space-time correlation functions and modal probability distributions. Nearly Gaussian statistics and exponential decay of the two-time correlation functions occur in the presence of electron dissipation, while in the absence of electron dissipation long-lived vortical structures occur. Formulas from renormalized, Markovianized statistical turbulence theory are given in a local approximation to interpret the dissipative turbulence.

  5. Converting analog interpretive data to digital formats for use in database and GIS applications

    USGS Publications Warehouse

    Flocks, James G.

    2004-01-01

    There is a growing need by researchers and managers for comprehensive and unified nationwide datasets of scientific data. These datasets must be in a digital format that is easily accessible using database and GIS applications, providing the user with access to a wide variety of current and historical information. Although most data currently being collected by scientists are already in a digital format, there is still a large repository of information in the literature and paper archive. Converting this information into a format accessible by computer applications is typically very difficult and can result in loss of data. However, since scientific data are commonly collected in a repetitious, concise matter (i.e., forms, tables, graphs, etc.), these data can be recovered digitally by using a conversion process that relates the position of an attribute in two-dimensional space to the information that the attribute signifies. For example, if a table contains a certain piece of information in a specific row and column, then the space that the row and column occupies becomes an index of that information. An index key is used to identify the relation between the physical location of the attribute and the information the attribute contains. The conversion process can be achieved rapidly, easily and inexpensively using widely available digitizing and spreadsheet software, and simple programming code. In the geological sciences, sedimentary character is commonly interpreted from geophysical profiles and descriptions of sediment cores. In the field and laboratory, these interpretations were typically transcribed to paper. The information from these paper archives is still relevant and increasingly important to scientists, engineers and managers to understand geologic processes affecting our environment. Direct scanning of this information produces a raster facsimile of the data, which allows it to be linked to the electronic world. But true integration of the content with

  6. An AAA-DDD triply hydrogen-bonded complex easily accessible for supramolecular polymers.

    PubMed

    Han, Yi-Fei; Chen, Wen-Qiang; Wang, Hong-Bo; Yuan, Ying-Xue; Wu, Na-Na; Song, Xiang-Zhi; Yang, Lan

    2014-12-15

    For a complementary hydrogen-bonded complex, when every hydrogen-bond acceptor is on one side and every hydrogen-bond donor is on the other, all secondary interactions are attractive and the complex is highly stable. AAA-DDD (A=acceptor, D=donor) is considered to be the most stable among triply hydrogen-bonded sequences. The easily synthesized and further derivatized AAA-DDD system is very desirable for hydrogen-bonded functional materials. In this case, AAA and DDD, starting from 4-methoxybenzaldehyde, were synthesized with the Hantzsch pyridine synthesis and Friedländer annulation reaction. The association constant determined by fluorescence titration in chloroform at room temperature is 2.09×10⁷ M⁻¹. The AAA and DDD components are not coplanar, but form a V shape in the solid state. Supramolecular polymers based on AAA-DDD triple hydrogen bonding have also been developed. This work may make AAA-DDD triply hydrogen-bonded sequences easily accessible for stimuli-responsive materials. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  8. A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility.

    PubMed

    Moore, Jason H; Gilbert, Joshua C; Tsai, Chia-Ti; Chiang, Fu-Tien; Holden, Todd; Barney, Nate; White, Bill C

    2006-07-21

    Detecting, characterizing, and interpreting gene-gene interactions or epistasis in studies of human disease susceptibility is both a mathematical and a computational challenge. To address this problem, we have previously developed a multifactor dimensionality reduction (MDR) method for collapsing high-dimensional genetic data into a single dimension (i.e. constructive induction) thus permitting interactions to be detected in relatively small sample sizes. In this paper, we describe a comprehensive and flexible framework for detecting and interpreting gene-gene interactions that utilizes advances in information theory for selecting interesting single-nucleotide polymorphisms (SNPs), MDR for constructive induction, machine learning methods for classification, and finally graphical models for interpretation. We illustrate the usefulness of this strategy using artificial datasets simulated from several different two-locus and three-locus epistasis models. We show that the accuracy, sensitivity, specificity, and precision of a naïve Bayes classifier are significantly improved when SNPs are selected based on their information gain (i.e. class entropy removed) and reduced to a single attribute using MDR. We then apply this strategy to detecting, characterizing, and interpreting epistatic models in a genetic study (n = 500) of atrial fibrillation and show that both classification and model interpretation are significantly improved.
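
    A sketch of the first two stages of the framework, assuming simulated genotypes: rank SNPs by information gain (mutual information with case/control status) and classify the top-ranked ones with naive Bayes. The MDR constructive-induction step is omitted for brevity.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.naive_bayes import CategoricalNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    n, p = 500, 50
    snps = rng.integers(0, 3, size=(n, p))             # genotypes coded 0/1/2
    # Hypothetical two-locus interaction drives disease status
    status = (((snps[:, 3] + snps[:, 7]) % 2 == 0)
              & (rng.random(n) < 0.8)).astype(int)

    gain = mutual_info_classif(snps, status, discrete_features=True, random_state=0)
    top = np.argsort(gain)[::-1][:5]
    acc = cross_val_score(CategoricalNB(), snps[:, top], status, cv=5).mean()
    print("top SNPs:", top, "CV accuracy:", round(acc, 3))
    ```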

  9. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  10. Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker

    PubMed Central

    Iasonos, Alexia; Chapman, Paul B.; Satagopan, Jaya M.

    2016-01-01

    There is an increased interest in finding predictive biomarkers that can guide treatment options for both mutation carriers and non-carriers. The statistical assessment of variation in treatment benefit (TB) according to the biomarker carrier status plays an important role in evaluating predictive biomarkers. For time to event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a Proportional Hazards regression model is commonly used as a measure of variation in treatment benefit. While this can be easily obtained using available statistical software packages, the interpretation of HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying differential in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus non-carriers. We illustrate the use and interpretation of the proposed measures using data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to HR. PMID:27141007
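
    A sketch of the proposed style of summary on the survival-probability scale, with hand-rolled Kaplan-Meier estimates and simulated data: treatment benefit at a landmark time is compared between carriers and non-carriers as an excess absolute risk difference.

    ```python
    import numpy as np

    def km_surv(time, event, t_star):
        """Kaplan-Meier estimate of S(t_star)."""
        s = 1.0
        for t in np.unique(time[event & (time <= t_star)]):
            at_risk = np.sum(time >= t)
            deaths = np.sum((time == t) & event)
            s *= 1 - deaths / at_risk
        return s

    rng = np.random.default_rng(9)
    def sim(n, rate):
        t = rng.exponential(1 / rate, n)
        c = rng.uniform(0, 5, n)                       # random censoring
        return np.minimum(t, c), t <= c

    t_star = 2.0                                       # landmark time
    groups = {("carrier", "treated"): 0.2, ("carrier", "control"): 0.6,
              ("non-carrier", "treated"): 0.45, ("non-carrier", "control"): 0.5}
    S = {k: km_surv(*sim(300, r), t_star) for k, r in groups.items()}

    tb_car = S[("carrier", "treated")] - S[("carrier", "control")]
    tb_non = S[("non-carrier", "treated")] - S[("non-carrier", "control")]
    print(f"excess absolute benefit: carriers {tb_car:.2f}, non-carriers {tb_non:.2f}")
    ```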

  11. Dose impact in radiographic lung injury following lung SBRT: Statistical analysis and geometric interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victoria; Kishan, Amar U.; Cao, Minsong

    2014-03-15

    Purpose: To demonstrate a new method of evaluating dose response of treatment-induced lung radiographic injury post-SBRT (stereotactic body radiotherapy) treatment and the discovery of bimodal dose behavior within clinically identified injury volumes. Methods: Follow-up CT scans at 3, 6, and 12 months were acquired from 24 patients treated with SBRT for stage-1 primary lung cancers or oligometastatic lesions. Injury regions in these scans were propagated to the planning CT coordinates by performing deformable registration of the follow-ups to the planning CTs. A bimodal behavior was repeatedly observed from the probability distribution for dose values within the deformed injury regions. Based on a mixture-Gaussian assumption, an Expectation-Maximization (EM) algorithm was used to obtain characteristic parameters for such distribution. Geometric analysis was performed to interpret such parameters and infer the critical dose level that is potentially inductive of post-SBRT lung injury. Results: The Gaussian mixture obtained from the EM algorithm closely approximates the empirical dose histogram within the injury volume with good consistency. The average Kullback-Leibler divergence values between the empirical differential dose volume histogram and the EM-obtained Gaussian mixture distribution were calculated to be 0.069, 0.063, and 0.092 for the 3, 6, and 12 month follow-up groups, respectively. The lower Gaussian component was located at approximately 70% prescription dose (35 Gy) for all three follow-up time points. The higher Gaussian component, contributed by the dose received by planning target volume, was located at around 107% of the prescription dose. Geometrical analysis suggests the mean of the lower Gaussian component, located at 35 Gy, as a possible indicator for a critical dose that induces lung injury after SBRT. Conclusions: An innovative and improved method for analyzing the correspondence between lung radiographic injury and SBRT treatment dose
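
    The mixture-Gaussian step can be sketched with an off-the-shelf EM implementation; the dose samples below are simulated around the reported 70% and 107% modes, with a hypothetical 50 Gy prescription.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(10)
    prescription = 50.0                                # Gy, hypothetical
    doses = np.concatenate([
        rng.normal(0.70 * prescription, 3.0, 4000),    # injured lung outside PTV
        rng.normal(1.07 * prescription, 2.0, 1500),    # within/near PTV
    ]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(doses)
    for mean, weight in sorted(zip(gmm.means_.ravel(), gmm.weights_)):
        print(f"component at {mean:.1f} Gy "
              f"({100 * mean / prescription:.0f}% Rx), weight {weight:.2f}")
    ```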

  12. Basic Statistical Concepts and Methods for Earth Scientists

    USGS Publications Warehouse

    Olea, Ricardo A.

    2008-01-01

    INTRODUCTION Statistics is the science of collecting, analyzing, interpreting, modeling, and displaying masses of numerical data primarily for the characterization and understanding of incompletely known systems. Over the years, these objectives have led to a fair amount of analytical work to achieve, substantiate, and guide descriptions and inferences.

  13. Patients and Medical Statistics

    PubMed Central

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-01-01

    BACKGROUND People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. OBJECTIVE To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. DESIGN Survey with retest after approximately 2 weeks. SUBJECTS Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. MEASURES We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. RESULTS Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test–retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's α=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). CONCLUSION The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data. PMID:16307623

  14. Descriptive statistics: the specification of statistical measures and their presentation in tables and graphs. Part 7 of a series on evaluation of scientific publications.

    PubMed

    Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria

    2009-09-01

    Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.

  15. Principles of Statistics: What the Sports Medicine Professional Needs to Know.

    PubMed

    Riemann, Bryan L; Lininger, Monica R

    2018-07-01

    Understanding the results and statistics reported in original research remains a large challenge for many sports medicine practitioners and, in turn, may be among one of the biggest barriers to integrating research into sports medicine practice. The purpose of this article is to provide minimal essentials a sports medicine practitioner needs to know about interpreting statistics and research results to facilitate the incorporation of the latest evidence into practice. Topics covered include the difference between statistical significance and clinical meaningfulness; effect sizes and confidence intervals; reliability statistics, including the minimal detectable difference and minimal important difference; and statistical power. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Statistical results on restorative dentistry experiments: effect of the interaction between main variables

    PubMed Central

    CAVALCANTI, Andrea Nóbrega; MARCHI, Giselle Maria; AMBROSANO, Gláucia Maria Bovi

    2010-01-01

    The interpretation of statistical analyses is a critical part of scientific research. When more than one main variable is being studied, the effect of the interaction between those variables is fundamental to the discussion of experiments. However, doubts can arise when the p-value of the interaction is greater than the significance level. Objective: To determine the most adequate interpretation for factorial experiments with interaction p-values slightly above the significance level. Materials and methods: The p-values of the interactions found in two restorative dentistry experiments (0.053 and 0.068) were interpreted in two distinct ways: treating the interaction as not significant and as significant. Results: Different findings were observed between the two analyses, and the studies' results became more coherent when the interaction was treated as significant. Conclusion: The p-value of the interaction between main variables must be analyzed with caution, because it can change the outcomes of research studies. Researchers are strongly advised to interpret the results of their statistical analyses carefully in order to discuss the findings of their experiments properly. PMID:20857003
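
    A sketch of the decision point discussed, using simulated stand-ins for the dentistry factors: fit a two-way model and inspect the interaction row of the ANOVA table before interpreting main effects. The factor names and effect sizes are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    df = pd.DataFrame({
        "adhesive": np.repeat(["A", "B"], 40),
        "curing": np.tile(np.repeat(["light", "dual"], 20), 2),
    })
    effect = ((df.adhesive == "B") * 2.0 + (df.curing == "dual") * 1.0
              + ((df.adhesive == "B") & (df.curing == "dual")) * 1.5)  # interaction
    df["bond_strength"] = 20 + effect + rng.normal(0, 2.0, len(df))

    model = smf.ols("bond_strength ~ adhesive * curing", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))             # interaction row decides
    ```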

  17. Interpretations

    NASA Astrophysics Data System (ADS)

    Bellac, Michel Le

    2014-11-01

    Although nobody can question the practical efficiency of quantum mechanics, there remains the serious question of its interpretation. As Valerio Scarani puts it, "We do not feel at ease with the indistinguishability principle (that is, the superposition principle) and some of its consequences." Indeed, this principle which pervades the quantum world is in stark contradiction with our everyday experience. From the very beginning of quantum mechanics, a number of physicists--but not the majority of them!--have asked the question of its "interpretation". One may simply deny that there is a problem: according to proponents of the minimalist interpretation, quantum mechanics is self-sufficient and needs no interpretation. The point of view held by a majority of physicists, that of the Copenhagen interpretation, will be examined in Section 10.1. The crux of the problem lies in the status of the state vector introduced in the preceding chapter to describe a quantum system, which is no more than a symbolic representation for the Copenhagen school of thought. Conversely, one may try to attribute some "external reality" to this state vector, that is, a correspondence between the mathematical description and the physical reality. In this latter case, it is the measurement problem which is brought to the fore. In 1932, von Neumann was first to propose a global approach, in an attempt to build a purely quantum theory of measurement examined in Section 10.2. This theory still underlies modern approaches, among them those grounded on decoherence theory, or on the macroscopic character of the measuring apparatus: see Section 10.3. Finally, there are non-standard interpretations such as Everett's many worlds theory or the hidden variables theory of de Broglie and Bohm (Section 10.4). Note, however, that this variety of interpretations has no bearing whatsoever on the practical use of quantum mechanics. There is no controversy on the way we should use quantum mechanics!

  18. New Jersey StreamStats: A web application for streamflow statistics and basin characteristics

    USGS Publications Warehouse

    Watson, Kara M.; Janowicz, Jon A.

    2017-08-02

    StreamStats is an interactive, map-based web application from the U.S. Geological Survey (USGS) that allows users to easily obtain streamflow statistics and watershed characteristics for both gaged and ungaged sites on streams throughout New Jersey. Users can determine flood magnitude and frequency, monthly flow-duration, monthly low-flow frequency statistics, and watershed characteristics for ungaged sites by selecting a point along a stream, or they can obtain this information for streamgages by selecting a streamgage location on the map. StreamStats provides several additional tools useful for water-resources planning and management, as well as for engineering purposes. StreamStats is available for most states and some river basins through a single web portal. Streamflow statistics for water resources professionals include the 1-percent annual chance flood flow (100-year peak flow) used to define flood plain areas and the monthly 7-day, 10-year low flow (M7D10Y) used in water supply management and studies of recreation, wildlife conservation, and wastewater dilution. Additionally, watershed or basin characteristics, including drainage area, percent area forested, and average percent of impervious areas, are commonly used in land-use planning and environmental assessments. These characteristics are easily derived through StreamStats.

  19. Interpretation miniatures

    NASA Astrophysics Data System (ADS)

    Nikolić, Hrvoje

    Most physicists do not have patience for reading long and obscure interpretation arguments and disputes. Hence, to attract attention of a wider physics community, in this paper various old and new aspects of quantum interpretations are explained in a concise and simple (almost trivial) form. About the “Copenhagen” interpretation, we note that there are several different versions of it and explain how to make sense of “local nonreality” interpretation. About the many-world interpretation (MWI), we explain that it is neither local nor nonlocal, that it cannot explain the Born rule, that it suffers from the preferred basis problem, and that quantum suicide cannot be used to test it. About the Bohmian interpretation, we explain that it is analogous to dark matter, use it to explain that there is no big difference between nonlocal correlation and nonlocal causation, and use some condensed-matter ideas to outline how nonrelativistic Bohmian theory could be a theory of everything. We also explain how different interpretations can be used to demystify the delayed choice experiment, to resolve the problem of time in quantum gravity, and to provide alternatives to quantum nonlocality. Finally, we explain why is life compatible with the second law.

  20. Mathematics pre-service teachers’ statistical reasoning about meaning

    NASA Astrophysics Data System (ADS)

    Kristanto, Y. D.

    2018-01-01

    This article offers a descriptive qualitative analysis of three second-year pre-service teachers’ statistical reasoning about the mean. Twenty-six pre-service teachers were tested using an open-ended problem in which they were expected to analyze a method for finding the mean of a data set. Three of their test results were selected for analysis. The results suggest that the pre-service teachers did not use context to develop their interpretation of the mean. Therefore, this article also offers strategies to promote statistical reasoning about the mean that use various contexts.

  1. Evidence of Pragmatic Impairments in Speech and Proverb Interpretation in Schizophrenia.

    PubMed

    Haas, Marc H; Chance, Steven A; Cram, David F; Crow, Tim J; Luc, Aslan; Hage, Sarah

    2015-08-01

    Schizophrenia has been suggested to involve linguistic pragmatic deficits. In this study, two aspects of pragmatic ability were assessed: comprehension and production. Drawing on relevance theory and Gricean implicatures to assess shared attention and interpretation in a linguistic context, discourse samples and proverb interpretations were transcribed from recorded interviews with patients with schizophrenia and control subjects. The productive aspect of implicatures was assessed by quantifying the use of 'connectors' in discourse. Receptive aspects were assessed by scoring interpretations of four common proverbs. Statistically significant effects were found: patients with schizophrenia used connectors less than controls and performed worse in proverb comprehension. Positive correlations between connectors and proverb interpretation in all subjects suggested a common pragmatic root for both productive and receptive aspects. The relative number of connectors (as a percentage of words used) provided a better index of pragmatic ability than the total number, because total output appeared to be influenced by additional factors such as IQ. Deficits were found in the use of connectors and in proverb interpretation even when controlling for verbal IQ, suggesting that pragmatic aspects of language are particularly vulnerable in schizophrenia compared with other verbal abilities.

  2. Advanced statistical energy analysis

    NASA Astrophysics Data System (ADS)

    Heron, K. H.

    1994-09-01

    A high-frequency theory (advanced statistical energy analysis (ASEA)) is developed which takes account of the mechanism of tunnelling, uses a ray theory approach to track the power flowing around a plate or a beam network, and then uses statistical energy analysis (SEA) to take care of any residual power. ASEA divides the energy of each sub-system into energy that is freely available for transfer to other sub-systems and energy that is fixed within the sub-system. ASEA can be interpreted as a series of mathematical models, the first of which is identical to standard SEA; subsequent higher-order models converge on an accurate prediction. Using a structural assembly of six rods as an example, ASEA is shown to converge onto the exact results while SEA is shown to overpredict by up to 60 dB.
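
    For orientation, the sketch below solves the standard steady-state SEA power balance for a two-subsystem network, the bookkeeping that ASEA extends. It does not implement ASEA's tunnelling mechanism, and all loss factors and input powers are hypothetical values.

    ```python
    import numpy as np

    # Two-subsystem SEA power balance (steady state):
    #   P_i = omega * (eta_i * E_i + eta_ij * E_i - eta_ji * E_j)
    # Unknowns are the subsystem energies E_1, E_2.
    omega = 2 * np.pi * 1000.0      # band centre frequency (rad/s), hypothetical
    eta = np.array([0.01, 0.02])    # damping loss factors (hypothetical)
    eta12, eta21 = 0.005, 0.003     # coupling loss factors (hypothetical)
    P = np.array([1.0, 0.0])        # input power (W), source on subsystem 1

    A = omega * np.array([
        [eta[0] + eta12, -eta21],
        [-eta12,         eta[1] + eta21],
    ])
    E = np.linalg.solve(A, P)       # subsystem energies
    print("Subsystem energies (J):", E)
    ```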

  3. Smart Interpretation - Application of Machine Learning in Geological Interpretation of AEM Data

    NASA Astrophysics Data System (ADS)

    Bach, T.; Gulbrandsen, M. L.; Jacobsen, R.; Pallesen, T. M.; Jørgensen, F.; Høyer, A. S.; Hansen, T. M.

    2015-12-01

    When using airborne geophysical measurements in e.g. groundwater mapping, an overwhelming amount of data is collected. Increasingly larger survey areas, denser data collection and limited resources combine into a growing problem: building geological models that use all the available data in a manner that is consistent with the geologist's knowledge about the geology of the survey area. In the ERGO project, funded by The Danish National Advanced Technology Foundation, we address this problem by developing new, usable tools enabling the geologist to utilize her geological knowledge directly in the interpretation of the AEM data, and thereby handle the large amount of data. In the project we have developed the mathematical basis for capturing geological expertise in a statistical model, and on this basis we have implemented new algorithms that have been operationalized and embedded in user-friendly software. In this software, the machine learning algorithm, Smart Interpretation, enables the geologist to use the system as an assistant in the geological modelling process. As the software 'learns' the geology from the geologist, the system suggests new modelling features in the data. In this presentation we demonstrate the application of the results from the ERGO project, including the proposed modelling workflow, on a variety of data examples.
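
    A minimal sketch of the general idea, not the ERGO algorithm itself: train a regressor on a geologist's existing interpretation points and let it suggest picks at new soundings. The features, targets and random-forest choice below are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Illustrative stand-in: predict an interpreted layer-boundary depth
    # from AEM-derived features at each sounding (hypothetical columns,
    # e.g. position and resistivity at several depths).
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 6))                       # interpreted soundings
    depth = 20 + 5 * X[:, 2] + rng.normal(0, 0.5, 200)   # geologist's picks (synthetic)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, depth)

    X_new = rng.uniform(size=(5, 6))                     # uninterpreted soundings
    print("Suggested picks:", model.predict(X_new))
    ```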

  4. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    PubMed

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data poses problems due to the large number of statistical tests which are involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. Authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
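
    A standardised effect size in this sense is essentially a between-group mean difference expressed in units of the pooled within-group standard deviation. The sketch below computes such an SES (the Cohen's d form) for one hypothetical biomarker; the data are invented and the exact variant used in the paper may differ.

    ```python
    import numpy as np

    def standardized_effect_size(treated, control):
        """Difference in means in units of the pooled within-group SD."""
        n1, n2 = len(treated), len(control)
        pooled_var = ((n1 - 1) * np.var(treated, ddof=1) +
                      (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2)
        return (np.mean(treated) - np.mean(control)) / np.sqrt(pooled_var)

    # Hypothetical biomarker values for a dose group vs. controls.
    control = np.array([5.1, 4.8, 5.3, 5.0, 4.9])
    high_dose = np.array([5.9, 6.1, 5.6, 6.0, 5.8])
    print(f"SES = {standardized_effect_size(high_dose, control):.2f}")
    ```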

  5. Combining Statistical Samples of Resolved-ISM Simulated Galaxies with Realistic Mock Observations to Fully Interpret HST and JWST Surveys

    NASA Astrophysics Data System (ADS)

    Faucher-Giguere, Claude-Andre

    2016-10-01

    HST has invested thousands of orbits to complete multi-wavelength surveys of high-redshift galaxies including the Deep Fields, COSMOS, 3D-HST and CANDELS. Over the next few years, JWST will undertake complementary, spatially-resolved infrared observations. Cosmological simulations are the most powerful tool to make detailed predictions for the properties of galaxy populations and to interpret these surveys. We will leverage recent major advances in the predictive power of cosmological hydrodynamic simulations to produce the first statistical sample of hundreds of galaxies simulated with 10 pc resolution and with explicit interstellar medium and stellar feedback physics proven to simultaneously reproduce the galaxy stellar mass function, the chemical enrichment of galaxies, and the neutral hydrogen content of galaxy halos. We will process our new set of full-volume cosmological simulations, called FIREBOX, with a mock imaging and spectral synthesis pipeline to produce realistic mock HST and JWST observations, including spatially-resolved photometry and spectroscopy. By comparing FIREBOX with recent high-redshift HST surveys, we will study the stellar build-up of galaxies, the evolution of massive star-forming clumps, their contribution to bulge growth, the connection of bulges to star formation quenching, and the triggering mechanisms of AGN activity. Our mock data products will also enable us to plan future JWST observing programs. We will publicly release all our mock data products to enable HST and JWST science beyond our own analysis, including with the Frontier Fields.

  6. Cross Talk: Evaluation of a Curriculum to Teach Medical Students How to Use Telephone Interpreter Services.

    PubMed

    Omoruyi, Emma A; Dunkle, Jesse; Dendy, Colby; McHugh, Erin; Barratt, Michelle S

    2018-03-01

    Telephone interpretation and recent technology advances assist patients with more timely access to rare languages, but no one has examined the role of this technology in the medical setting and how medical students can be prepared for its use. We sought to determine if a structured curriculum on interpreter use would promote learners' self-reported competency in these encounters and if proficiency would be demonstrated in actual patient encounters. Training on the principles of interpreter use with a focus on communication technology was added to medical student education. The students later voluntarily completed a retrospective pre/post training competency self-assessment. A cohort of students rotating at a clinical site had a blinded review of their telephone interpretation encounters scored on a modified validated scale and compared to scored encounters with preintervention learners. Nested ANOVA models were used for audio file analysis. A total of 176 students who completed the training reported a statistically significant improvement in all 4 interpretation competency domains. Eighty-three audio files were analyzed from students before and after the intervention. These scored encounters showed no statistical difference between the scores of the 2 groups. However, plotting the mean scores over time from each encounter suggests that those who received the curriculum started their rotation with higher scores and maintained those scores. In an evaluation of learners' ability to use interpreters in actual patient encounters, focused education led to earlier proficiency in using interpreters compared to peers who received no training. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  7. Novice Interpretations of Progress Monitoring Graphs: Extreme Values and Graphical Aids

    ERIC Educational Resources Information Center

    Newell, Kirsten W.; Christ, Theodore J.

    2017-01-01

    Curriculum-Based Measurement of Reading (CBM-R) is frequently used to monitor instructional effects and evaluate response to instruction. Educators often view the data graphically on a time-series graph that might include a variety of statistical and visual aids, which are intended to facilitate the interpretation. This study evaluated the effects…

  8. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    NASA Astrophysics Data System (ADS)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistical data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various topics in basic statistics along with parametric statistical data analysis. The output of the system is parametric statistical analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed in the Java programming language. The server-side programming language is PHP with the CodeIgniter framework, and the database used is MySQL. The system development methodology is the waterfall model, with stages of analysis, design, coding, testing, implementation and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make it easier for students to understand statistical analysis on mobile devices.
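
    As an example of the kind of parametric analysis such an application would return, the sketch below runs an independent two-sample t-test. The app itself is Java/PHP; Python and the invented scores here are used purely for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical exam scores for two groups of students.
    group_a = np.array([72, 85, 78, 90, 66, 81])
    group_b = np.array([64, 70, 75, 68, 73, 71])

    # Welch's t-test (no equal-variance assumption).
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    ```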

  9. Patient movement characteristics and the impact on CBCT image quality and interpretability.

    PubMed

    Spin-Neto, Rubens; Costa, Cláudio; Salgado, Daniela Mra; Zambrana, Nataly Rm; Gotfredsen, Erik; Wenzel, Ann

    2018-01-01

    To assess the impact of patient movement characteristics and metal/radiopaque materials in the field-of-view (FOV) on CBCT image quality and interpretability. 162 CBCT examinations were performed in 134 consecutive (i.e. prospective data collection) patients (average age: 27.2 years; range: 9-73). An accelerometer-gyroscope system registered the patient's head position during examination. The threshold for movement definition was set at ≥0.5-mm movement distance based on the accelerometer-gyroscope recording. Movement complexity was defined as uniplanar/multiplanar. Three observers independently scored: presence of stripe (i.e. streak) artefacts (absent/"enamel stripes"/"metal stripes"/"movement stripes"), overall unsharpness (absent/present) and image interpretability (interpretable/not interpretable). Kappa statistics assessed interobserver agreement. χ2 tests analysed whether movement distance, movement complexity and metal/radiopaque material in the FOV affected image quality and image interpretability. Relevant risk factors (p ≤ 0.20) were entered into a multivariate logistic regression analysis with "not interpretable" as the outcome. Interobserver agreement for image interpretability was good (average = 0.65). Movement distance and presence of metal/radiopaque materials significantly affected image quality and interpretability. There were 22-28 cases in which the observers stated the image was not interpretable. Small movements (i.e. <3 mm) did not significantly affect image interpretability. For movements ≥ 3 mm, the risk that a case was scored as "not interpretable" was significantly (p ≤ 0.05) increased [OR 3.2-11.3; 95% CI (0.70-65.47)]. Metal/radiopaque material was also a significant (p ≤ 0.05) risk factor (OR 3.61-5.05). Patient movement ≥3 mm and metal/radiopaque material in the FOV significantly affected CBCT image quality and interpretability.
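
    A sketch of the analysis design described above: a multivariate logistic regression with "not interpretable" as the outcome, run on synthetic data. The predictors, effect sizes and counts are invented; only the modelling structure follows the abstract.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-in for the study data: 162 examinations with two
    # binary risk factors (movement >= 3 mm, metal/radiopaque in FOV).
    rng = np.random.default_rng(1)
    move3mm = rng.integers(0, 2, 162)
    metal = rng.integers(0, 2, 162)
    logit_p = -2.5 + 1.6 * move3mm + 1.3 * metal        # invented effects
    y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))     # "not interpretable"

    X = sm.add_constant(np.column_stack([move3mm, metal]))
    fit = sm.Logit(y, X).fit(disp=0)
    print("Odds ratios:", np.exp(fit.params[1:]))
    ```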

  10. Interpretative commenting.

    PubMed

    Vasikaran, Samuel

    2008-08-01

    * Clinical laboratories should be able to offer interpretation of the results they produce. * At a minimum, contact details for interpretative advice should be available on laboratory reports. * Interpretative comments may be verbal or written and printed. * Printed comments on reports should be offered judiciously, only where they would add value; no comment preferred to inappropriate or dangerous comment. * Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available. * Standard tied comments ("canned" comments) can have some limited use. * Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available. * Interpretative commenting should only be provided by appropriately trained and credentialed personnel. * Audit of comments and continued professional development of personnel providing them are important for quality assurance.

  11. Interpretative Commenting

    PubMed Central

    Vasikaran, Samuel

    2008-01-01

    Summary Clinical laboratories should be able to offer interpretation of the results they produce. At a minimum, contact details for interpretative advice should be available on laboratory reports. Interpretative comments may be verbal or written and printed. Printed comments on reports should be offered judiciously, only where they would add value; no comment preferred to inappropriate or dangerous comment. Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available. Standard tied comments (“canned” comments) can have some limited use. Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available. Interpretative commenting should only be provided by appropriately trained and credentialed personnel. Audit of comments and continued professional development of personnel providing them are important for quality assurance. PMID:18852867

  12. Statistics corner: A guide to appropriate use of correlation coefficient in medical research.

    PubMed

    Mukaka, M M

    2012-09-01

    Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to appropriate use of correlation in medical research and to highlight some misuse. Examples of the applications of the correlation coefficient are provided using data from statistical simulations as well as real data. A rule of thumb for interpreting the size of a correlation coefficient is also provided.
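
    A minimal example of computing and interpreting a Pearson correlation; the paired values are invented, and the bands quoted in the comment are one commonly cited rule of thumb of the kind the article provides.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired clinical measurements.
    x = np.array([1.2, 2.1, 2.9, 3.3, 4.0, 4.8, 5.5])
    y = np.array([2.4, 3.1, 4.2, 4.0, 5.1, 5.9, 6.4])

    r, p = stats.pearsonr(x, y)
    print(f"r = {r:.2f}, p = {p:.4f}")

    # One commonly cited banding: |r| >= 0.9 very high, 0.7-0.9 high,
    # 0.5-0.7 moderate, 0.3-0.5 low, < 0.3 negligible correlation.
    ```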

  13. Using Statistics to Lie, Distort, and Abuse Data

    ERIC Educational Resources Information Center

    Bintz, William; Moore, Sara; Adams, Cheryll; Pierce, Rebecca

    2009-01-01

    Statistics is a branch of mathematics that involves organization, presentation, and interpretation of data, both quantitative and qualitative. Data do not lie, but people do. On the surface, quantitative data are basically inanimate objects, nothing more than lifeless and meaningless symbols that appear on a page, calculator, computer, or in one's…

  14. The Impact of Language Experience on Language and Reading: A Statistical Learning Approach

    ERIC Educational Resources Information Center

    Seidenberg, Mark S.; MacDonald, Maryellen C.

    2018-01-01

    This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…

  15. Ukrainian-Speaking Migrants’ Concerning the Use of Interpreters in Healthcare Service: A Pilot Study

    PubMed Central

    Hadziabdic, Emina

    2016-01-01

    The aim of this pilot study was to investigate Ukrainian-speaking migrants’ attitudes to the use of interpreters in healthcare service, in order to test a developed questionnaire and recruitment strategy. A descriptive survey design was used, with a 51-item structured self-administered questionnaire completed by 12 Ukrainian-speaking migrants and analyzed by descriptive statistics. The respondents wanted an interpreter to act as an objective communication and practical aid, with personal qualities such as a good knowledge of languages and translation ability. In contrast, the clothes worn by the interpreter and the interpreter’s religion were not viewed as important aspects. The findings support the developed questionnaire and recruitment strategy, which in turn can be used in a larger planned investigation of the same topic in order to arrange a good interpretation situation in accordance with persons’ desires, irrespective of countries’ different rules in healthcare policies regarding interpretation. PMID:27014391

  16. Power spectra as a diagnostic tool in probing statistical/nonstatistical behavior in unimolecular reactions

    NASA Astrophysics Data System (ADS)

    Chang, Xiaoyen Y.; Sewell, Thomas D.; Raff, Lionel M.; Thompson, Donald L.

    1992-11-01

    The possibility of utilizing different types of power spectra obtained from classical trajectories as a diagnostic tool to identify the presence of nonstatistical dynamics is explored by using the unimolecular bond-fission reactions of 1,2-difluoroethane and the 2-chloroethyl radical as test cases. In previous studies, the reaction rates for these systems were calculated by using a variational transition-state theory and classical trajectory methods. A comparison of the results showed that 1,2-difluoroethane is a nonstatistical system, while the 2-chloroethyl radical behaves statistically. Power spectra for these two systems have been generated under various conditions. The characteristics of these spectra are as follows: (1) The spectra for the 2-chloroethyl radical are always broader and more coupled to other modes than is the case for 1,2-difluoroethane. This is true even at very low levels of excitation. (2) When an internal energy near or above the dissociation threshold is initially partitioned into a local C-H stretching mode, the power spectra for 1,2-difluoroethane broaden somewhat, but discrete and somewhat isolated bands are still clearly evident. In contrast, the analogous power spectra for the 2-chloroethyl radical exhibit a near complete absence of isolated bands. The general appearance of the spectrum suggests a very high level of mode-to-mode coupling, large intramolecular vibrational energy redistribution (IVR) rates, and global statistical behavior. (3) The appearance of the power spectrum for the 2-chloroethyl radical is unaltered regardless of whether the initial C-H excitation is in the CH2 or the CH2Cl group. This result also suggests statistical behavior. These results are interpreted to mean that power spectra may be used as a diagnostic tool to assess the statistical character of a system. The presence of a diffuse spectrum exhibiting a nearly complete loss of isolated structures indicates that the dissociation dynamics of the molecule will be statistical.
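
    The diagnostic itself is easy to reproduce in outline: compute the power spectrum of a trajectory-derived signal and inspect how isolated its bands are. The sketch below uses a synthetic two-mode signal and a hypothetical timestep rather than real trajectory data.

    ```python
    import numpy as np

    # Power spectrum of a (synthetic) trajectory signal: sharp, isolated
    # bands suggest weak mode coupling; a diffuse spectrum suggests strong
    # coupling, fast IVR, and statistical behaviour.
    dt = 1e-16                                   # timestep (s), hypothetical
    t = np.arange(0, 2**14) * dt
    signal = (np.sin(2 * np.pi * 9e13 * t) +         # C-H stretch-like mode
              0.4 * np.sin(2 * np.pi * 3e13 * t))    # lower-frequency mode

    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window)) ** 2
    freqs = np.fft.rfftfreq(len(signal), dt)
    print("Dominant frequency (Hz):", freqs[np.argmax(spectrum)])
    ```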

  17. SDE decomposition and A-type stochastic interpretation in nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Yuan, Ruoshi; Tang, Ying; Ao, Ping

    2017-12-01

    An innovative theoretical framework for stochastic dynamics based on the decomposition of a stochastic differential equation (SDE) into a dissipative component, a detailed-balance-breaking component, and a dual-role potential landscape has been developed, which has fruitful applications in physics, engineering, chemistry, and biology. It introduces the A-type stochastic interpretation of the SDE beyond the traditional Ito or Stratonovich interpretation or even the α-type interpretation for multidimensional systems. The potential landscape serves as a Hamiltonian-like function in nonequilibrium processes without detailed balance, which extends this important concept from equilibrium statistical physics to the nonequilibrium region. A question on the uniqueness of the SDE decomposition was recently raised. Our review of both the mathematical and physical aspects shows that uniqueness is guaranteed. The demonstration leads to a better understanding of the robustness of the novel framework. In addition, we discuss related issues including the limitations of an approach to obtaining the potential function from a steady-state distribution.
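
    To see where an "interpretation" enters numerically, the sketch below integrates one SDE with an Euler-Maruyama step (converging to the Ito solution) and a stochastic Heun step (converging to the Stratonovich solution). The A-type interpretation discussed in the paper is a further, different choice that is not implemented here; the drift, noise and step size are hypothetical.

    ```python
    import numpy as np

    # dX = f(X) dt + g(X) dW. Where the noise coefficient is evaluated is
    # exactly what an "interpretation" fixes.
    rng = np.random.default_rng(2)
    f = lambda x: -x               # drift (hypothetical)
    g = lambda x: 0.5 * x          # multiplicative noise (hypothetical)
    dt, n = 1e-3, 10_000
    x_ito, x_str = 1.0, 1.0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        # Ito: coefficients at the left endpoint (Euler-Maruyama).
        x_ito = x_ito + f(x_ito) * dt + g(x_ito) * dw
        # Stratonovich: Heun predictor-corrector (trapezoidal average).
        x_pred = x_str + f(x_str) * dt + g(x_str) * dw
        x_str = (x_str
                 + (f(x_str) + f(x_pred)) / 2 * dt
                 + (g(x_str) + g(x_pred)) / 2 * dw)
    print(f"Ito: {x_ito:.4f}  Stratonovich: {x_str:.4f}")
    ```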

  18. Instructional Sensitivity Statistics Appropriate for Objectives-Based Test Items. CSE Report No. 91.

    ERIC Educational Resources Information Center

    Kosecoff, Jacqueline B.; Klein, Stephen P.

    Two types of sensitivity indices were developed in this paper, one internal to the total test and the second external. To evaluate the success of these statistics the three criteria suggested for a satisfactory index of item quality were considered. The Internal Sensitivity Index appears to meet these demands. Certainly it is easily computed. In…

  19. Use of the dynamic stiffness method to interpret experimental data from a nonlinear system

    NASA Astrophysics Data System (ADS)

    Tang, Bin; Brennan, M. J.; Gatti, G.

    2018-05-01

    The interpretation of experimental data from nonlinear structures is challenging, primarily because of dependency on the types and levels of excitation, and coupling issues with test equipment. In this paper, the dynamic stiffness method, which is commonly used in the analysis of linear systems, is applied to interpret the data from a vibration test of a controllable compressed beam structure coupled to a test shaker. For a single mode of the system, this method facilitates the separation of mass, stiffness and damping effects, including nonlinear stiffness effects. It also allows the separation of the dynamics of the shaker from the structure under test. The approach needs to be used with care, and is only suitable if the nonlinear system has a response that is predominantly at the excitation frequency. For the structure under test, the raw experimental data revealed little about the underlying causes of the dynamic behaviour. However, the dynamic stiffness approach allowed the effects due to the nonlinear stiffness to be easily determined.
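
    The separation the method affords can be seen in a single-degree-of-freedom caricature: with dynamic stiffness K(ω) = F/X, the real part k − mω² and the imaginary part cω isolate stiffness, mass and damping. The sketch below fits these from a synthetic linear FRF; the nonlinear-stiffness and shaker-separation aspects of the paper are not reproduced, and all parameter values are invented.

    ```python
    import numpy as np

    # Dynamic stiffness of a SDOF system: K(w) = k - m*w**2 + 1j*c*w.
    # Fitting Re(K) against w**2 separates stiffness (intercept) and
    # mass (negative slope); Im(K)/w gives the damping coefficient.
    m, c, k = 0.8, 3.0, 1.2e4                # hypothetical values
    w = np.linspace(20.0, 200.0, 50)         # excitation frequencies (rad/s)
    K = k - m * w**2 + 1j * c * w            # synthetic "measured" F/X

    slope, intercept = np.polyfit(w**2, K.real, 1)
    print("m ~", -slope, " k ~", intercept)
    print("c ~", np.mean(K.imag / w))
    ```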

  20. Environmental statistics and optimal regulation

    NASA Astrophysics Data System (ADS)

    Sivak, David; Thomson, Matt

    2015-03-01

    The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
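
    A toy version of the Bayesian decision rule mentioned above: infer the posterior probability that a nutrient is present from a noisy signal, and express the enzyme only when the expected benefit exceeds the cost. The two-Gaussian measurement model, prior and costs are all hypothetical.

    ```python
    import numpy as np

    def posterior_present(signal, prior=0.3, mu=(0.0, 1.0), sigma=0.8):
        """P(nutrient present | noisy signal), two-Gaussian model."""
        like = lambda s, mu_: np.exp(-(s - mu_) ** 2 / (2 * sigma**2))
        num = prior * like(signal, mu[1])
        return num / (num + (1 - prior) * like(signal, mu[0]))

    cost, benefit = 1.0, 4.0          # enzyme production cost vs. benefit
    threshold = cost / benefit        # express iff expected benefit > cost
    for s in [0.2, 0.6, 1.1]:
        p = posterior_present(s)
        print(f"signal={s:.1f}  P(present)={p:.2f}  express={p > threshold}")
    ```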

  1. Statistical geometric affinity in human brain electric activity

    NASA Astrophysics Data System (ADS)

    Chornet-Lurbe, A.; Oteo, J. A.; Ros, J.

    2007-05-01

    The representation of the human electroencephalogram (EEG) records by neurophysiologists demands standardized time-amplitude scales for their correct conventional interpretation. In a suite of graphical experiments involving scaling affine transformations we have been able to convert electroencephalogram samples corresponding to any particular sleep phase and relaxed wakefulness into each other. We propound a statistical explanation for that finding in terms of data collapse. As a sequel, we determine characteristic time and amplitude scales and outline a possible physical interpretation. An analysis for characteristic times based on lacunarity is also carried out as well as a study of the synchrony between left and right EEG channels.

  2. Differing Interpretations of Report Terminology Between Primary Care Physicians and Radiologists.

    PubMed

    Gunn, Andrew J; Tuttle, Mitch C; Flores, Efren J; Mangano, Mark D; Bennett, Susan E; Sahani, Dushyant V; Choy, Garry; Boland, Giles W

    2016-12-01

    The lexicons of the radiologist and the referring physician may not be synonymous, which could cause confusion with radiology reporting. To further explore this possibility, we surveyed radiologists and primary care physicians (PCPs) regarding their respective interpretations of report terminology. A survey was distributed to radiologists and PCPs through an internal listserv. Respondents were asked to provide an interpretation of the statistical likelihood of the presence of metastatic disease based upon the terminology used within a hypothetical radiology report. Ten common modifying terms were evaluated. Potential responses for the statistical likelihoods included 0%-25%, 26%-50%, 51%-75%, 76%-99%, and 100%. Differences between the groups were evaluated using either a χ2 test or Fisher exact test, as appropriate. The phrases "diagnostic for metastatic disease" and "represents metastatic disease" were selected by a high percentage of both groups as conferring a 100% likelihood of "true metastatic disease." The phrases "cannot exclude metastatic disease" and "may represent metastatic disease" were selected by a high proportion of both groups as conferring a 0% likelihood of "true metastatic disease." Radiologists assigned a higher statistical likelihood to the terms "diagnostic for metastatic disease" (P = .016), "represents metastatic disease" (P = .004), "suspicious for metastatic disease" (P = .04), "consistent with metastatic disease" (P < .0001), and "compatible with metastatic disease" (P = .003). A qualitative agreement among radiologists and PCPs exists concerning the significance of the evaluated terminology, although radiologists assigned a higher statistical likelihood than PCPs for several phrases. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. Application of multivariate statistical techniques in microbial ecology

    PubMed Central

    Paliy, O.; Shankar, V.

    2016-01-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large scale ecological datasets. An especially noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions, and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amounts of data, powerful statistical techniques of multivariate analysis are well suited to analyze and interpret these datasets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular dataset. In this review we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive, and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and dataset structure. PMID:26786791

  4. Application of multivariate statistical techniques in microbial ecology.

    PubMed

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, a noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.

  5. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  6. Penultimate interpretation.

    PubMed

    Neuman, Yair

    2010-10-01

    Interpretation is at the center of psychoanalytic activity. However, interpretation is always challenged by that which is beyond our grasp, the 'dark matter' of our mind, what Bion describes as ' O'. O is one of the most central and difficult concepts in Bion's thought. In this paper, I explain the enigmatic nature of O as a high-dimensional mental space and point to the price one should pay for substituting the pre-symbolic lexicon of the emotion-laden and high-dimensional unconscious for a low-dimensional symbolic representation. This price is reification--objectifying lived experience and draining it of vitality and complexity. In order to address the difficulty of approaching O through symbolization, I introduce the term 'Penultimate Interpretation'--a form of interpretation that seeks 'loopholes' through which the analyst and the analysand may reciprocally save themselves from the curse of reification. Three guidelines for 'Penultimate Interpretation' are proposed and illustrated through an imaginary dialogue. Copyright © 2010 Institute of Psychoanalysis.

  7. SEDIDAT: A BASIC program for the collection and statistical analysis of particle settling velocity data

    NASA Astrophysics Data System (ADS)

    Wright, Robyn; Thornberg, Steven M.

    SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is easily understood by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi and Phi data.
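
    The velocity-to-diameter conversion can be sketched with Stokes' law as a stand-in for the modified Gibbs equation the program actually uses. The densities, viscosity and measured velocity below are hypothetical, and the Phi conversion assumes the diameter is expressed in millimetres.

    ```python
    import numpy as np

    def stokes_diameter(v, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
        """Equivalent spherical diameter (m) from settling velocity v (m/s),
        valid in the low-Reynolds-number (Stokes) regime."""
        return np.sqrt(18.0 * mu * v / (g * (rho_p - rho_f)))

    v = 0.01                         # measured settling velocity, m/s
    d = stokes_diameter(v)
    phi = -np.log2(d * 1000.0)       # grain size in Phi units (d in mm)
    print(f"d = {d * 1e6:.0f} um, Phi = {phi:.2f}")
    ```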

  8. Crying without a cause and being easily upset in two-year-olds: heritability and predictive power of behavioral problems.

    PubMed

    Groen-Blokhuis, Maria M; Middeldorp, Christel M; van Beijsterveldt, Catharina E M; Boomsma, Dorret I

    2011-10-01

    In order to estimate the influence of genetic and environmental factors on 'crying without a cause' and 'being easily upset' in 2-year-old children, a large twin study was carried out. Prospective data were available for ~18,000 2-year-old twin pairs from the Netherlands Twin Register. A bivariate genetic analysis was performed using structural equation modeling in the Mx software package. The influence of maternal personality characteristics and demographic and lifestyle factors was tested to identify specific risk factors that may underlie the shared environment of twins. Furthermore, it was tested whether crying without a cause and being easily upset were predictive of later internalizing, externalizing and attention problems. Crying without a cause yielded a heritability estimate of 60% in boys and girls. For easily upset, the heritability was estimated at 43% in boys and 31% in girls. The variance explained by shared environment varied between 35% and 63%. The correlation between crying without a cause and easily upset (r = .36) was explained both by genetic and shared environmental factors. Birth cohort, gestational age, socioeconomic status, parental age, parental smoking behavior and alcohol use during pregnancy did not explain the shared environmental component. Neuroticism of the mother explained a small proportion of the additive genetic, but not of the shared environmental effects for easily upset. Crying without a cause and being easily upset at age 2 were predictive of internalizing, externalizing and attention problems at age 7, with effect sizes of .28-.42. A large influence of shared environmental factors on crying without a cause and easily upset was detected. Although these effects could be specific to these items, we could not explain them by personality characteristics of the mother or by demographic and lifestyle factors, and we recognize that these effects may reflect other maternal characteristics. A substantial influence of genetic factors was also detected.

  9. Chances Are...Making Probability and Statistics Fun To Learn and Easy To Teach.

    ERIC Educational Resources Information Center

    Pfenning, Nancy

    Probability and statistics may be the horror of many college students, but if these subjects are trimmed to include only the essential symbols, they are easily within the grasp of interested middle school or even elementary school students. This book can serve as an introduction for any beginner, from gifted students who would like to broaden…

  10. Statistical Mechanics of Combinatorial Auctions

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-09-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.
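
    For orientation, the optimisation problem itself can be stated in a few lines: pick non-conflicting bids maximising revenue. The brute-force enumeration below (on invented bids) is exact but exponential; the point of the statistical-physics treatment and the iterative algorithm above is precisely to handle instances where such enumeration is hopeless.

    ```python
    from itertools import combinations

    # Tiny winner-determination problem: each item is sold at most once,
    # and we maximise revenue over subsets of (bundle, price) bids.
    bids = [({"A", "B"}, 5), ({"B", "C"}, 4), ({"C"}, 2), ({"A"}, 3)]

    best_revenue, best_combo = 0, ()
    for r in range(1, len(bids) + 1):
        for combo in combinations(range(len(bids)), r):
            items = [item for b in combo for item in bids[b][0]]
            if len(items) == len(set(items)):      # bundles must not overlap
                revenue = sum(bids[b][1] for b in combo)
                if revenue > best_revenue:
                    best_revenue, best_combo = revenue, combo

    print(best_revenue, [bids[b][0] for b in best_combo])
    ```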

  11. Universal statistics of soft gamma-ray repeating (SGR) bursts

    NASA Astrophysics Data System (ADS)

    Kondratyev, V. N.; Korovina, Yu. V.

    2018-01-01

    Soft gamma repeater (SGR) bursts are considered to be a release of magnetic energy stored in the baryon degrees of freedom of the magnetar crust. It is shown that this interpretation allows all observations of these bursts to be systematized and universal statistical properties to be revealed and explained.

  12. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we alert that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
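
    The basic measurement is easy to reproduce: record successive entry times of a chaotic orbit into a finite interval and examine the distribution of the gaps. The sketch below does this for the logistic map at full chaos; the interval and seed are arbitrary choices.

    ```python
    import numpy as np

    # Recurrence times to a finite interval for the chaotic logistic map
    # x -> 4 x (1 - x); away from short-time memory effects the
    # distribution is close to exponential.
    x, interval = 0.1234, (0.3, 0.31)
    last_entry, rec_times = None, []
    for t in range(1, 2_000_000):
        x = 4.0 * x * (1.0 - x)
        if interval[0] <= x < interval[1]:
            if last_entry is not None:
                rec_times.append(t - last_entry)
            last_entry = t

    rec = np.array(rec_times)
    print("mean recurrence time:", rec.mean())
    print("std/mean (1 for an exponential law):", rec.std() / rec.mean())
    ```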

  13. Factor regression for interpreting genotype-environment interaction in bread-wheat trials.

    PubMed

    Baril, C P

    1992-05-01

    The French INRA wheat (Triticum aestivum L. em Thell.) breeding program is based on multilocation trials to produce high-yielding, adapted lines for a wide range of environments. Differential genotypic responses to variable environmental conditions limit the accuracy of yield estimations. Factor regression was used to partition the genotype-environment (GE) interaction into four biologically interpretable terms. Yield data were analyzed from 34 wheat genotypes grown in four environments, using 12 auxiliary agronomic traits as genotypic and environmental covariates. Most of the GE interaction (91%) was explained by the combination of only three traits: 1,000-kernel weight, lodging susceptibility and spike length. These traits are easily measured in breeding programs; the factor regression model can therefore provide a convenient and useful prediction method for yield.

  14. Statistical Analysis of Time-Series from Monitoring of Active Volcanic Vents

    NASA Astrophysics Data System (ADS)

    Lachowycz, S.; Cosma, I.; Pyle, D. M.; Mather, T. A.; Rodgers, M.; Varley, N. R.

    2016-12-01

    Despite recent advances in the collection and analysis of time-series from volcano monitoring, and the resulting insights into volcanic processes, challenges remain in forecasting and interpreting activity from near real-time analysis of monitoring data. Statistical methods have potential to characterise the underlying structure and facilitate intercomparison of these time-series, and so inform interpretation of volcanic activity. We explore the utility of multiple statistical techniques that could be widely applicable to monitoring data, including Shannon entropy and detrended fluctuation analysis, by their application to various data streams from volcanic vents during periods of temporally variable activity. Each technique reveals changes through time in the structure of some of the data that were not apparent from conventional analysis. For example, we calculate the Shannon entropy (a measure of the randomness of a signal) of time-series from the recent dome-forming eruptions of Volcán de Colima (Mexico) and Soufrière Hills (Montserrat). The entropy of real-time seismic measurements and the count rate of certain volcano-seismic event types from both volcanoes is found to be temporally variable, with these data generally having higher entropy during periods of lava effusion and/or larger explosions. In some instances, the entropy shifts prior to or coincident with changes in seismic or eruptive activity, some of which were not clearly recognised by real-time monitoring. Comparison with other statistics demonstrates the sensitivity of the entropy to the data distribution, but that it is distinct from conventional statistical measures such as coefficient of variation. We conclude that each analysis technique examined could provide valuable insights for interpretation of diverse monitoring time-series.
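
    As a sketch of the entropy diagnostic, the code below compares the Shannon entropy of a quiet synthetic signal with one containing bursts. In practice the statistic would be computed in sliding windows over real monitoring series; the bin count and fixed range here are arbitrary choices.

    ```python
    import numpy as np

    def shannon_entropy(x, bins=16, value_range=(-10.0, 10.0)):
        """Shannon entropy (bits) of a signal's histogram on fixed bins."""
        counts, _ = np.histogram(x, bins=bins, range=value_range)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())

    gen = np.random.default_rng(3)
    quiet = gen.normal(0, 1, 5000)                       # steady background
    eruptive = np.concatenate([gen.normal(0, 1, 4000),
                               gen.normal(0, 6, 1000)])  # bursty episodes
    print(f"entropy (quiet):    {shannon_entropy(quiet):.2f} bits")
    print(f"entropy (eruptive): {shannon_entropy(eruptive):.2f} bits")
    ```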

  15. [Notes on vital statistics for the study of perinatal health].

    PubMed

    Juárez, Sol Pía

    2014-01-01

    Vital statistics, published by the National Statistics Institute in Spain, are a highly important source for the study of perinatal health nationwide. However, the process of data collection is not well-known and has implications both for the quality and interpretation of the epidemiological results derived from this source. The aim of this study was to present how the information is collected and some of the associated problems. This study is the result of an analysis of the methodological notes from the National Statistics Institute and first-hand information obtained from hospitals, the Central Civil Registry of Madrid, and the Madrid Institute for Statistics. Greater integration between these institutions is required to improve the quality of birth and stillbirth statistics. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.

  16. Misleading reporting and interpretation of results in major infertility journals.

    PubMed

    Glujovsky, Demian; Sueldo, Carlos E; Borghi, Carolina; Nicotra, Pamela; Andreucci, Sara; Ciapponi, Agustín

    2016-05-01

    To evaluate the proportion of randomized controlled trials (RCTs) published in top infertility journals indexed on PubMed that reported their results with proper effect estimates and their precision estimation, while correctly interpreting both measures. Cross-sectional study evaluating all the RCTs published in top infertility journals during 2014. Not applicable. Not applicable. Not applicable. Proportion of RCTs that reported both relative and absolute effect size measures and its precision. Among the 32 RCTs published in 2014 in the top infertility journals reviewed, 37.5% (95% confidence interval [CI], 21.1-56.3) did not mention in their abstracts whether the difference among the study arms was statistically or clinically significant, and only 6.3% (95% CI, 0.8-20.8) used a CI of the absolute difference. Similarly, in the results section, these elements were observed in 28.2% (95% CI, 13.7-46.7) and 15.6% (95% CI, 5.3-32.8), respectively. Only one study clearly expressed the minimal clinically important difference in their methods section, but we found related proxies in 53% (95% CI, 34.7-70.9). None of the studies used CIs to draw conclusions about the clinical or statistical significance. We found 13 studies where the interpretation of the findings could be misleading. Recommended reporting items are underused in top infertility journals, which could lead to misleading interpretations. Authors, reviewers, and editorial boards should emphasize their use to improve reporting quality. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
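
    One of the under-reported items is easy to produce: an absolute difference between arms with its confidence interval. The sketch below computes a Wald 95% CI for a risk difference from invented counts; judging clinical significance then means comparing that interval against a pre-stated minimal clinically important difference.

    ```python
    import numpy as np

    # Hypothetical pregnancy counts per arm of a two-arm trial.
    e1, n1 = 45, 150     # events / total, treatment
    e2, n2 = 30, 150     # events / total, control

    p1, p2 = e1 / n1, e2 / n2
    diff = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"risk difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
    ```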

  17. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study will test an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both the key uniformity required in the context of universal composability and the operational meaning of the failure probability of the key extraction. However, this proposal has not been concretely verified for many years, while H. P. Yuen and O. Hirota have cast doubt on this interpretation since 2009. To ascertain this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. In this way, we calculated the statistical distance, which corresponds to trace distance in quantum theory after a quantum measurement is done, and compared it with the failure probability to see whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why trace distance is not suitable to guarantee the security in QKD from the viewpoint of quantum binary decision theory.
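
    The classical comparison described above can be sketched directly: for measured (classical) data, the trace distance reduces to the statistical (total variation) distance between distributions. The code below estimates that distance between an empirical key distribution and the ideal uniform one; the generator, key length and sample size are arbitrary stand-ins for the physical RNG used in the study.

    ```python
    import numpy as np

    # Statistical (total variation) distance between an empirical key
    # distribution and the ideal uniform distribution over 8-bit keys.
    rng = np.random.default_rng(4)
    n_keys, key_bits = 100_000, 8
    keys = rng.integers(0, 2**key_bits, n_keys)   # stand-in "physical RNG"

    emp = np.bincount(keys, minlength=2**key_bits) / n_keys
    uniform = np.full(2**key_bits, 1.0 / 2**key_bits)
    tv = 0.5 * np.abs(emp - uniform).sum()
    print(f"statistical distance from uniform: {tv:.4f}")
    ```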

  18. Analysis and interpretation of cost data in randomised controlled trials: review of published studies

    PubMed Central

    Barber, Julie A; Thompson, Simon G

    1998-01-01

    Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. Key messages: Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. In at least two thirds of the papers, the main

  19. Diagnostic concordance among pathologists interpreting breast biopsy specimens.

    PubMed

    Elmore, Joann G; Longton, Gary M; Carney, Patricia A; Geller, Berta M; Onega, Tracy; Tosteson, Anna N A; Nelson, Heidi D; Pepe, Margaret S; Allison, Kimberly H; Schnitt, Stuart J; O'Malley, Frances P; Weaver, Donald L

    2015-03-17

    % CI, 31%-39%) were underinterpreted; and among benign cases without atypia (2070 interpretations), 87% (95% CI, 85%-89%) were concordant and 13% (95% CI, 11%-15%) were overinterpreted. Disagreement with the reference diagnosis was statistically significantly higher among biopsies from women with higher (n = 122) vs lower (n = 118) breast density on prior mammograms (overall concordance rate, 73% [95% CI, 71%-75%] for higher vs 77% [95% CI, 75%-80%] for lower, P < .001), and among pathologists who interpreted lower weekly case volumes (P < .001) or worked in smaller practices (P = .034) or nonacademic settings (P = .007). In this study of pathologists, in which diagnostic interpretation was based on a single breast biopsy slide, overall agreement between the individual pathologists' interpretations and the expert consensus-derived reference diagnoses was 75.3%, with the highest level of concordance for invasive carcinoma and lower levels of concordance for DCIS and atypia. Further research is needed to understand the relationship of these findings with patient management.

  20. Pointwise probability reinforcements for robust statistical inference.

    PubMed

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include e.g. outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement, which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
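
    The paper's PPR construction is not reproduced here. As a generic stand-in in the same spirit (down-weighting abnormally frequent observations during likelihood-style estimation), the sketch below fits a location parameter with a Huber loss and compares it with the plain mean on data containing outliers.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def huber_loss(r, delta=1.0):
        """Quadratic near zero, linear in the tails: outliers get bounded
        influence instead of dominating the fit."""
        return np.where(np.abs(r) <= delta,
                        0.5 * r**2,
                        delta * (np.abs(r) - 0.5 * delta))

    data = np.concatenate([np.random.default_rng(5).normal(0, 1, 95),
                           np.full(5, 8.0)])     # 5 abnormally frequent values
    res = minimize_scalar(lambda m: huber_loss(data - m).sum())
    print(f"robust mean = {res.x:.3f}, plain mean = {data.mean():.3f}")
    ```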

  1. Evaluation of easily measured risk factors in the prediction of osteoporotic fractures

    PubMed Central

    Bensen, Robert; Adachi, Jonathan D; Papaioannou, Alexandra; Ioannidis, George; Olszynski, Wojciech P; Sebaldt, Rolf J; Murray, Timothy M; Josse, Robert G; Brown, Jacques P; Hanley, David A; Petrie, Annie; Puglia, Mark; Goldsmith, Charlie H; Bensen, W

    2005-01-01

    Background Fracture represents the single most important clinical event in patients with osteoporosis, yet remains under-predicted. As few premonitory symptoms for fracture exist, it is of critical importance that physicians effectively and efficiently identify individuals at increased fracture risk. Methods Of 3426 postmenopausal women in CANDOO, 40, 158, 99, and 64 women developed a new hip, vertebral, wrist or rib fracture, respectively. Seven easily measured risk factors predictive of fracture in research trials were examined in clinical practice including: age (<65, 65–69, 70–74, 75–79, 80+ years), rising from a chair with arms (yes, no), weight (< 57, ≥ 57kg), maternal history of hip fracture (yes, no), prior fracture after age 50 (yes, no), hip T-score (>-1, -1 to >-2.5, ≤-2.5), and current smoking status (yes, no). Multivariable logistic regression analysis was conducted. Results The inability to rise from a chair without the use of arms (3.58; 95% CI: 1.17, 10.93) was the most significant risk factor for new hip fracture. Notable risk factors for predicting new vertebral fractures were: low body weight (1.57; 95% CI: 1.04, 2.37), current smoking (1.95; 95% CI: 1.20, 3.18) and age between 75–79 years (1.96; 95% CI: 1.10, 3.51). New wrist fractures were significantly identified by low body weight (1.71, 95% CI: 1.01, 2.90) and prior fracture after 50 years (1.96; 95% CI: 1.19, 3.22). Predictors of new rib fractures include a maternal history of a hip fracture (2.89; 95% CI: 1.04, 8.08) and a prior fracture after 50 years (2.16; 95% CI: 1.20, 3.87). Conclusion This study has shown that there exists a variety of predictors of future fracture, besides BMD, that can be easily assessed by a physician. The significance of each variable depends on the site of incident fracture. Of greatest interest is that an inability to rise from a chair is perhaps the most readily identifiable significant risk factor for hip fracture and can be easily incorporated into clinical practice.

  2. Blinded interpretation of study results can feasibly and effectively diminish interpretation bias.

    PubMed

    Järvinen, Teppo L N; Sihvonen, Raine; Bhandari, Mohit; Sprague, Sheila; Malmivaara, Antti; Paavola, Mika; Schünemann, Holger J; Guyatt, Gordon H

    2014-07-01

    Controversial and misleading interpretation of data from randomized trials is common. How to avoid misleading interpretation has received little attention. Herein, we describe two applications of an approach that involves blinded interpretation of the results by study investigators. The approach involves developing two interpretations of the results on the basis of a blinded review of the primary outcome data (experimental treatment A compared with control treatment B). One interpretation assumes that A is the experimental intervention and another assumes that A is the control. After agreeing that there will be no further changes, the investigators record their decisions and sign the resulting document. The randomization code is then broken, the correct interpretation chosen, and the manuscript finalized. Review of the document by an external authority before finalization can provide another safeguard against interpretation bias. We found the blinded preparation of a summary of data interpretation described in this article practical, efficient, and useful. Blinded data interpretation may decrease the frequency of misleading data interpretation. Widespread adoption of blinded data interpretation would be greatly facilitated were it added to the minimum set of recommendations outlining proper conduct of randomized controlled trials (eg, the Consolidated Standards of Reporting Trials statement). Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  3. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter included describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this study formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  4. Targeting Change: Assessing a Faculty Learning Community Focused on Increasing Statistics Content in Life Science Curricula

    ERIC Educational Resources Information Center

    Parker, Loran Carleton; Gleichsner, Alyssa M.; Adedokun, Omolola A.; Forney, James

    2016-01-01

    Transformation of research in all biological fields necessitates the design, analysis, and interpretation of large data sets. Preparing students with the requisite skills in experimental design, statistical analysis and interpretation, and mathematical reasoning will require both curricular reform and faculty who are willing and able to integrate…

  5. U-interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arvind; Gostelow, K.P.

    1982-02-01

    The author argues that by giving a unique name to every activity generated during a computation, the u-interpreter can provide greater concurrency in the interpretation of data flow graphs. 19 references.

  6. Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker.

    PubMed

    Iasonos, Alexia; Chapman, Paul B; Satagopan, Jaya M

    2016-05-01

    Increased interest has been expressed in finding predictive biomarkers that can guide treatment options for both mutation carriers and noncarriers. The statistical assessment of variation in treatment benefit (TB) according to the biomarker carrier status plays an important role in evaluating predictive biomarkers. For time-to-event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a proportional hazards regression model is commonly used as a measure of variation in TB. Although this can be easily obtained using available statistical software packages, the interpretation of HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying the differential in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus noncarriers. We illustrate the use and interpretation of the proposed measures with data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to HR. Clin Cancer Res; 22(9); 2114-20. ©2016 AACR. ©2016 American Association for Cancer Research.
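
    The survival-probability scale the authors advocate is easy to illustrate: estimate a landmark survival probability per (biomarker, treatment) group and contrast the treatment effect in carriers versus noncarriers as an excess absolute risk. The sketch below hand-rolls a Kaplan-Meier estimator on simulated data; it illustrates the scale, not the authors' exact estimators, and all rates and the landmark time are invented.

        import numpy as np

        def km_surv(time, event, t):
            """Kaplan-Meier estimate of S(t); event = 1 means observed."""
            order = np.argsort(time)
            time, event = time[order], event[order]
            s, at_risk = 1.0, len(time)
            for ti, ei in zip(time, event):
                if ti > t:
                    break
                if ei:
                    s *= 1.0 - 1.0 / at_risk
                at_risk -= 1
            return s

        rng = np.random.default_rng(2)
        def simulate(n, rate):  # exponential event times, uniform censoring
            t_event = rng.exponential(1.0 / rate, n)
            t_cens = rng.uniform(0, 8, n)
            return np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)

        hazards = {("carrier", "treated"): 0.10, ("carrier", "control"): 0.40,
                   ("noncarrier", "treated"): 0.25, ("noncarrier", "control"): 0.30}
        S = {g: km_surv(*simulate(500, h), t=2.0) for g, h in hazards.items()}

        # Excess absolute risk due to treatment at t = 2 years, by carrier status.
        tb_carrier = S[("carrier", "treated")] - S[("carrier", "control")]
        tb_noncarrier = S[("noncarrier", "treated")] - S[("noncarrier", "control")]
        print(tb_carrier, tb_noncarrier, tb_carrier - tb_noncarrier)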

  7. Making On-line Science Course Materials Easily Translatable and Accessible Worldwide: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Adams, Wendy K.; Alhadlaq, Hisham; Malley, Christopher V.; Perkins, Katherine K.; Olson, Jonathan; Alshaya, Fahad; Alabdulkareem, Saleh; Wieman, Carl E.

    2012-02-01

    The PhET Interactive Simulations Project partnered with the Excellence Research Center of Science and Mathematics Education at King Saud University with the joint goal of making simulations useable worldwide. One of the main challenges of this partnership is to make PhET simulations and the website easily translatable into any language. The PhET project team overcame this challenge by creating the Translation Utility. This tool allows a person fluent in both English and another language to easily translate any of the PhET simulations and requires minimal computer expertise. In this paper we discuss the technical issues involved in this software solution, as well as the issues involved in obtaining accurate translations. We share our solutions to many of the unexpected problems we encountered that would apply generally to making on-line scientific course materials available in many different languages, including working with: languages written right-to-left, different character sets, and different conventions for expressing equations, variables, units and scientific notation.

  8. [Sem: suitable statistical software adapted for research in oncology].

    PubMed

    Kwiatkowski, F; Girard, M; Hacene, K; Berlie, J

    2000-10-01

    Many software packages have been adapted for medical use, but they rarely provide convenient support for both data management and statistics. A recent cooperative effort resulted in a new software package, Sem (Statistics Epidemiology Medicine), which allows data management of trials as well as statistical analysis of the same data. It can be used by non-professionals in statistics (biologists, physicians, researchers, data managers) since, except for multivariate models, the software itself selects and performs the most appropriate test, after which complementary tests can be requested if needed. Sem's database manager (DBM) is not compatible with the usual DBMs, which constitutes a first protection against loss of privacy. Other safeguards (passwords, encryption, ...) strengthen data security, all the more necessary today since Sem can be run on computer networks. The data organization supports multiplicity: forms can be duplicated per patient. Dates are treated in a special but transparent manner (sorting, date and delay calculations, ...). Sem communicates with common desktop software, often through a simple copy/paste, so statistics can easily be performed on data stored in external spreadsheets, and slides can be made by pasting graphs (survival curves, ...) with a single mouse click. Already in daily use at over fifty sites in different hospitals, this product, combining data management and statistics, appears to be a convenient and innovative solution.

  9. Easily Transported CCD Systems for Use in Astronomy Labs

    NASA Astrophysics Data System (ADS)

    Meisel, D.

    1992-12-01

    Relatively inexpensive CCD cameras and portable computers are now easily obtained as commercially available products. I will describe a prototype system that can be used by introductory astronomy students, even in urban environments, to obtain useful observations of the night sky. It is based on the ST-4 CCDs made by Santa Barbara Instruments Group and Macintosh PowerBook 145 computers. Students take outdoor images directly from the college campus, bring the exposures back into the lab, and download the images into our networked server. These stored images can then be processed (at a later time) using a variety of image-processing programs, including a new astronomical version of the popular "freeware" NIH Image package that is currently under development at Geneseo. The prototype of this system will be demonstrated and available for hands-on use during the meeting. This work is supported by NSF ILI Demonstration Grant USE9250493 and grants from SUNY-GENESEO.

  10. [How to fit and interpret multilevel models using SPSS].

    PubMed

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from 11 onward) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in the health and behaviour sciences.
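
    The same five models can also be fitted outside SPSS. As a sketch, here are models (1) and (4) from the list above in Python with statsmodels' MixedLM, on simulated two-level data (students nested in schools; all names and numbers are hypothetical):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated two-level data: 25 students in each of 30 schools.
        rng = np.random.default_rng(3)
        n_schools, n_per = 30, 25
        school = np.repeat(np.arange(n_schools), n_per)
        u = rng.normal(0, 2, n_schools)[school]          # random school intercepts
        x = rng.normal(0, 1, n_schools * n_per)
        y = 10 + 0.8 * x + u + rng.normal(0, 3, n_schools * n_per)
        df = pd.DataFrame({"y": y, "x": x, "school": school})

        # (1) One-way ANOVA with random effects: intercept-only, random intercept.
        m1 = smf.mixedlm("y ~ 1", df, groups=df["school"]).fit()

        # (4) Regression with random coefficients: random intercept and slope.
        m4 = smf.mixedlm("y ~ x", df, groups=df["school"], re_formula="~x").fit()
        print(m1.summary())
        print(m4.summary())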

  11. Use of keyword hierarchies to interpret gene expression patterns.

    PubMed

    Masys, D R; Welsh, J B; Lynn Fink, J; Gribskov, M; Klacansky, I; Corbeil, J

    2001-04-01

    High-density microarray technology permits the quantitative and simultaneous monitoring of thousands of genes. The interpretation challenge is to extract relevant information from this large amount of data. A growing variety of statistical analysis approaches are available to identify clusters of genes that share common expression characteristics, but provide no information regarding the biological similarities of genes within clusters. The published literature provides a potential source of information to assist in interpretation of clustering results. We describe a data mining method that uses indexing terms ('keywords') from the published literature linked to specific genes to present a view of the conceptual similarity of genes within a cluster or group of interest. The method takes advantage of the hierarchical nature of Medical Subject Headings used to index citations in the MEDLINE database, and the registry numbers applied to enzymes.
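
    As a toy illustration of scoring conceptual similarity from indexing terms (plain set overlap only; the method described above additionally exploits the hierarchical structure of the MeSH vocabulary), with invented gene-to-keyword assignments:

        from itertools import combinations

        # Hypothetical gene -> MeSH keyword sets for genes in one cluster.
        keywords = {
            "CDK1":  {"Cell Cycle", "Mitosis", "Protein Kinases"},
            "CCNB1": {"Cell Cycle", "Mitosis", "Cyclins"},
            "ALB":   {"Serum Albumin", "Liver"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Pairwise conceptual similarity; high overlap suggests shared biology.
        for g1, g2 in combinations(keywords, 2):
            print(g1, g2, round(jaccard(keywords[g1], keywords[g2]), 2))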

  12. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
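
    The standard SDM formulation that this viewpoint treats as a special case can be sketched in a few lines of numpy: fixed random hard locations, activation of every location within a Hamming radius of the query address, and bipolar counters pooled at read time (all sizes below are hypothetical):

        import numpy as np

        rng = np.random.default_rng(4)
        N, M, RADIUS = 256, 1000, 111   # word size, hard locations, activation radius

        addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
        counters = np.zeros((M, N))              # bipolar counters per location

        def active(addr):
            # Hard locations within Hamming distance RADIUS of the query.
            return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

        def write(addr, data):
            counters[active(addr)] += 2 * data - 1    # accumulate +1 / -1

        def read(addr):
            s = counters[active(addr)].sum(axis=0)    # pool activated counters
            return (s > 0).astype(int)                # majority vote per bit

        pattern = rng.integers(0, 2, N)
        write(pattern, pattern)                       # autoassociative store
        noisy = pattern.copy()
        noisy[rng.choice(N, 20, replace=False)] ^= 1  # corrupt 20 of 256 bits
        print(np.mean(read(noisy) == pattern))        # fraction of bits recovered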

  13. A Generalized Approach for the Interpretation of Geophysical Well Logs in Ground-Water Studies - Theory and Application

    USGS Publications Warehouse

    Paillet, Frederick L.; Crowder, R.E.

    1996-01-01

    Quantitative analysis of geophysical logs in ground-water studies often involves at least as broad a range of applications and variation in lithology as is typically encountered in petroleum exploration, making such logs difficult to calibrate and complicating inversion problem formulation. At the same time, data inversion and analysis depend on inversion model formulation and refinement, so that log interpretation cannot be deferred to a geophysical log specialist unless active involvement with interpretation can be maintained by such an expert over the lifetime of the project. We propose a generalized log-interpretation procedure designed to guide hydrogeologists in the interpretation of geophysical logs, and in the integration of log data into ground-water models that may be systematically refined and improved in an iterative way. The procedure is designed to maximize the effective use of three primary contributions from geophysical logs: (1) The continuous depth scale of the measurements along the well bore; (2) The in situ measurement of lithologic properties and the correlation with hydraulic properties of the formations over a finite sample volume; and (3) Multiple independent measurements that can potentially be inverted for multiple physical or hydraulic properties of interest. The approach is formulated in the context of geophysical inversion theory, and is designed to be interfaced with surface geophysical soundings and conventional hydraulic testing. The step-by-step procedures given in our generalized interpretation and inversion technique are based on both qualitative analysis designed to assist formulation of the interpretation model, and quantitative analysis used to assign numerical values to model parameters. The approach bases a decision as to whether quantitative inversion is statistically warranted by formulating an over-determined inversion. If no such inversion is consistent with the inversion model, quantitative inversion is judged not
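
    The quantitative decision step can be illustrated generically: with more independent log measurements than unknown formation properties, the inversion is over-determined, and the size of the residual misfit indicates whether the interpretation model is adequate. A toy linear sketch (the sensitivity matrix and property names are invented):

        import numpy as np

        # Hypothetical linear forward model: each log reading mixes two
        # formation properties (say, porosity and clay fraction) plus noise.
        G = np.array([[1.0, 0.3],    # log 1 sensitivity to (porosity, clay)
                      [0.2, 1.1],    # log 2
                      [0.7, 0.8]])   # log 3: 3 measurements, 2 unknowns
        m_true = np.array([0.25, 0.40])
        rng = np.random.default_rng(5)
        d = G @ m_true + rng.normal(0, 0.01, 3)

        m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)

        # If the normalised misfit is large, the interpretation model itself
        # is suspect and quantitative inversion is not warranted.
        misfit = np.linalg.norm(G @ m_hat - d) / np.linalg.norm(d)
        print(m_hat, f"relative misfit = {misfit:.3%}")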

  14. Influences of Radiology Trainees on Screening Mammography Interpretation.

    PubMed

    Hawley, Jeffrey R; Taylor, Clayton R; Cubbison, Alyssa M; Erdal, B Selnur; Yildiz, Vedat O; Carkaci, Selin

    2016-05-01

    Participation of radiology trainees in screening mammographic interpretation is a critical component of radiology residency and fellowship training. The aim of this study was to investigate and quantify the effects of trainee involvement on screening mammographic interpretation and diagnostic outcomes. Screening mammograms interpreted at an academic medical center by six dedicated breast imagers over a three-year period were identified, with cases interpreted by an attending radiologist alone or in conjunction with a trainee. Trainees included radiology residents, breast imaging fellows, and fellows from other radiology subspecialties during breast imaging rotations. Trainee participation, patient variables, results of diagnostic evaluations, and pathology were recorded. A total of 47,914 mammograms from 34,867 patients were included, with an overall recall rate for attending radiologists reading alone of 14.7% compared with 18.0% when involving a trainee (P < .0001). Overall cancer detection rate for attending radiologists reading alone was 5.7 per 1,000 compared with 5.2 per 1,000 when reading with a trainee (P = .517). When reading with a trainee, dense breasts represented a greater portion of recalls (P = .0001), and more frequently, greater than one abnormality was described in the breast (P = .013). Detection of ductal carcinoma in situ versus invasive carcinoma or invasive cancer type was not significantly different. The mean size of cancers in patients recalled by attending radiologists alone was smaller, and nodal involvement was less frequent, though these differences were not statistically significant. These results demonstrate a significant overall increase in recall rate when interpreting screening mammograms with radiology trainees, with no change in cancer detection rate. Radiology faculty members should be aware of this potential and mitigate tendencies toward greater false positives. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  15. Three Insights from a Bayesian Interpretation of the One-Sided "P" Value

    ERIC Educational Resources Information Center

    Marsman, Maarten; Wagenmakers, Eric-Jan

    2017-01-01

    P values have been critiqued on several grounds but remain entrenched as the dominant inferential method in the empirical sciences. In this article, we elaborate on the fact that in many statistical models, the one-sided "P" value has a direct Bayesian interpretation as the approximate posterior mass for values lower than zero. The…
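
    The stated equivalence can be checked numerically in the simplest setting: a normal mean with known σ and a flat prior on μ, where the one-sided p-value for H0: μ = 0 versus μ > 0 equals the posterior mass below zero exactly (this sketch assumes that special case; the article treats a broader class of models approximately):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        sigma, n = 1.0, 25
        x = rng.normal(0.3, sigma, n)

        z = x.mean() / (sigma / np.sqrt(n))
        p_one_sided = norm.sf(z)          # frequentist: P(Z > z) under mu = 0

        # Flat prior => posterior mu | data ~ N(xbar, sigma^2 / n).
        post_mass_below_zero = norm.cdf(0, loc=x.mean(), scale=sigma / np.sqrt(n))

        print(p_one_sided, post_mass_below_zero)   # equal up to floating point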

  16. Applying Descriptive Statistics to Teaching the Regional Classification of Climate.

    ERIC Educational Resources Information Center

    Lindquist, Peter S.; Hammel, Daniel J.

    1998-01-01

    Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…

  17. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
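
    For reference, the central quantity in standard notation: the relative quantum entropy between a density matrix ρ and a prior σ, and the constrained ME solution whose Lagrange multipliers play the role of hyperparameters. These are textbook forms consistent with the abstract, not equations quoted from the paper:

        S(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[ \rho \left( \ln\rho - \ln\sigma \right) \right] \;\ge\; 0,

        \rho^{\star} = \frac{1}{Z} \exp\!\Big( \ln\sigma + \sum_k \lambda_k A_k \Big),
        \qquad
        Z = \operatorname{Tr} \exp\!\Big( \ln\sigma + \sum_k \lambda_k A_k \Big),

    where the multipliers λ_k enforce the constraints Tr[ρ A_k] = a_k; when ρ and σ commute this reduces to the classical (Shannon) maximum-entropy solution with prior σ.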

  18. Counterbalancing and Other Uses of Repeated-Measures Latin-Square Designs: Analyses and Interpretations.

    ERIC Educational Resources Information Center

    Reese, Hayne W.

    1997-01-01

    Recommends that when repeated-measures Latin-square designs are used to counterbalance treatments across a procedural variable or to reduce the number of treatment combinations given to each participant, effects be analyzed statistically, and that in all uses, researchers consider alternative interpretations of the variance associated with the…
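
    The counterbalancing construction itself is mechanical. A minimal sketch of the classic balanced Latin-square recipe (first row 0, n-1, 1, n-2, ...; each later row adds 1 mod n), which for an even number of treatments also balances immediate carryover, each treatment directly preceding every other exactly once:

        def balanced_latin_square(n):
            """Rows = participant orders, columns = positions, entries = treatments.

            First row alternates low and high treatments (0, n-1, 1, n-2, ...);
            row i adds i mod n. For even n each ordered pair of treatments
            occurs in adjacent positions exactly once; odd n needs a second,
            reversed square for the same guarantee.
            """
            first, lo, hi = [], 0, n - 1
            for k in range(n):
                if k % 2 == 0:
                    first.append(lo)
                    lo += 1
                else:
                    first.append(hi)
                    hi -= 1
            return [[(t + i) % n for t in first] for i in range(n)]

        for row in balanced_latin_square(4):
            print(row)   # each treatment once per row (order) and column (position)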

  19. Exploratory Visual Analysis of Statistical Results from Microarray Experiments Comparing High and Low Grade Glioma

    PubMed Central

    Reif, David M.; Israel, Mark A.; Moore, Jason H.

    2007-01-01

    The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occurring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. EVA is available at no charge to academic users and can be found at http://www.epistasis.org. PMID:19390666

  20. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    USGS Publications Warehouse

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, then we interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the
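
    The convergence check at the heart of this process can be sketched: run two different simplifications of the same site-by-species matrix and quantify how well the resulting site groupings agree. The data and the particular pair of methods below (k-means, Ward hierarchical clustering, agreement via the adjusted Rand index) are stand-ins, not the paper's exact suite:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        # Hypothetical site-by-species abundance matrix: 40 sites, 12 species,
        # drawn from two underlying community types.
        rng = np.random.default_rng(7)
        X = np.vstack([rng.poisson(lam, (20, 12))
                       for lam in (rng.uniform(0, 3, 12), rng.uniform(2, 8, 12))])

        # Two independent statistical simplifications of the same data.
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        hc = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

        # Convergent groupings (high agreement) are the defensible candidates;
        # the groupings must still be interpreted against ecological guilds.
        print(adjusted_rand_score(km, hc))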

  1. Trends in Fertility in the United States. Vital and Health Statistics, Data from the National Vital Statistics System. Series 21, Number 28.

    ERIC Educational Resources Information Center

    Taffel, Selma

    This report presents and interprets birth statistics for the United States with particular emphasis on changes that took place during the period 1970-73. Data for the report were based on information entered on birth certificates collected from all states. The majority of the document comprises graphs and tables of data, but there are four short…

  2. iCFD: Interpreted Computational Fluid Dynamics - Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design - The secondary clarifier.

    PubMed

    Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy

    2015-10-15

    The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools used, e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii
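
    The LHS step is directly reproducible with scipy's qmc module; the factor names and ranges below are invented placeholders for whatever design and flow factors the screening stage retains:

        from scipy.stats import qmc

        factors = ["inlet_flow", "sludge_conc", "tank_depth", "weir_radius"]
        l_bounds = [50.0, 1.0, 3.0, 8.0]     # hypothetical lower bounds
        u_bounds = [400.0, 6.0, 6.0, 16.0]   # hypothetical upper bounds

        sampler = qmc.LatinHypercube(d=len(factors), seed=42)
        unit = sampler.random(n=50)                    # 50 points in [0, 1)^4
        designs = qmc.scale(unit, l_bounds, u_bounds)  # scale to physical ranges

        print(designs[:3])   # each row = one CFD boundary-condition set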

  3. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    PubMed

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  4. CRN5EXP: Expert system for statistical quality control

    NASA Technical Reports Server (NTRS)

    Hentea, Mariana

    1991-01-01

    The purpose of the Expert System CRN5EXP is to assist in checking the quality of the coils at two very important mills: Hot Rolling and Cold Rolling in a steel plant. The system interprets the statistical quality control charts, diagnoses and predicts the quality of the steel. Measurements of process control variables are recorded in a database and sample statistics such as the mean and the range are computed and plotted on a control chart. The chart is analyzed through patterns using the C Language Integrated Production System (CLIPS) and a forward chaining technique to reach a conclusion about the causes of defects and to take management measures for the improvement of the quality control techniques. The Expert System combines the certainty factors associated with the process control variables to predict the quality of the steel. The paper presents the approach to extract data from the database, the reason to combine certainty factors, the architecture and the use of the Expert System. However, the interpretation of control chart patterns requires the human expert's knowledge and lends itself to expert-system rules.
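
    The chart computation that precedes the rule-based pattern analysis is standard Shewhart methodology; a minimal sketch for X-bar and R charts on simulated subgroup data (A2, D3, D4 are the usual tabulated constants for subgroups of five):

        import numpy as np

        A2, D3, D4 = 0.577, 0.0, 2.114   # standard constants for subgroup size 5

        rng = np.random.default_rng(8)
        samples = rng.normal(10.0, 0.2, (25, 5))   # 25 subgroups of 5 measurements

        xbar = samples.mean(axis=1)
        ranges = samples.max(axis=1) - samples.min(axis=1)
        xbar_bar, r_bar = xbar.mean(), ranges.mean()

        limits = {
            "xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
            "R":    (D3 * r_bar, D4 * r_bar),
        }
        out = np.flatnonzero((xbar < limits["xbar"][0]) | (xbar > limits["xbar"][1]))
        print(limits, "out-of-control subgroups:", out)

    Points outside these limits, or systematic patterns within them, are what a rule base such as the CLIPS one described above would then diagnose.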

  5. Experimental Design and Interpretation of Functional Neuroimaging Studies of Cognitive Processes

    PubMed Central

    Caplan, David

    2008-01-01

    This article discusses how the relation between experimental and baseline conditions in functional neuroimaging studies affects the conclusions that can be drawn from a study about the neural correlates of components of the cognitive system and about the nature and organization of those components. I argue that certain designs in common use—in particular the contrast of qualitatively different representations that are processed at parallel stages of a functional architecture—can never identify the neural basis of a cognitive operation and have limited use in providing information about the nature of cognitive systems. Other types of designs—such as ones that contrast representations that are computed in immediately sequential processing steps and ones that contrast qualitatively similar representations that are parametrically related within a single processing stage—are more easily interpreted. PMID:17979122

  6. Which statistics should tropical biologists learn?

    PubMed

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection; mediocre or bad experimental design and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.

  7. Statistical behavior of the tensile property of heated cotton fiber

    USDA-ARS?s Scientific Manuscript database

    The temperature dependence of the tensile property of single cotton fiber was studied in the range of 160-300°C using the Favimat test, and its statistical behavior was interpreted in terms of structural changes. The tenacity of control cotton fiber was well described by the single Weibull distribution,...
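
    A single two-parameter Weibull fit of the kind mentioned can be sketched with scipy on simulated stand-in data (the study fitted measured tenacities; every number here is invented):

        import numpy as np
        from scipy.stats import weibull_min

        # Simulated stand-in for single-fiber tenacity measurements.
        rng = np.random.default_rng(9)
        tenacity = weibull_min.rvs(c=4.5, scale=30.0, size=200, random_state=rng)

        # Two-parameter fit: location pinned at zero (floc=0), as is usual
        # for strength data.
        shape, loc, scale = weibull_min.fit(tenacity, floc=0)
        print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f}")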

  8. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.
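
    The mixture-model idea can be illustrated with a toy Gaussian mixture over (temperature, emissivity) samples; actual LWIR surface models are more elaborate, and all numbers here are invented:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Toy surface parameters: rows are (temperature K, band emissivity)
        # drawn from two hypothetical material classes.
        rng = np.random.default_rng(10)
        grass = np.column_stack([rng.normal(295, 3, 300), rng.normal(0.98, 0.005, 300)])
        soil = np.column_stack([rng.normal(305, 5, 200), rng.normal(0.93, 0.010, 200)])
        X = np.vstack([grass, soil])

        gm = GaussianMixture(n_components=2, covariance_type="full",
                             random_state=0).fit(X)
        print(gm.means_, gm.weights_)    # recovered class centres and proportions

        # Sample synthetic surface parameters, e.g. to drive an at-sensor
        # radiance simulation for performance trades.
        synthetic, labels = gm.sample(5)
        print(synthetic)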

  9. Pulsar statistics and their interpretations

    NASA Technical Reports Server (NTRS)

    Arnett, W. D.; Lerche, I.

    1981-01-01

    It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.

  10. GoCxx: a tool to easily leverage C++ legacy code for multicore-friendly Go libraries and frameworks

    NASA Astrophysics Data System (ADS)

    Binet, Sébastien

    2012-12-01

    Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a ‘single-thread’ processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no panacea. Sure, C++11 will improve on the current situation (by standardizing on std::thread, introducing lambda functions and defining a memory model) but it will do so at the price of complicating further an already quite sophisticated language. This level of sophistication has probably already strongly motivated analysis groups to migrate to CPython, hoping for its current limitations with respect to multicore scalability to be either lifted (Global Interpreter Lock removal) or for the advent of a new Python VM better tailored for this kind of environment (PyPy, Jython, …). Could HENP migrate to a language with none of the deficiencies of C++ (build time, deployment, low-level tools for concurrency) and with the fast turn-around time, simplicity and ease of coding of Python? This paper will try to make the case for Go - a young open source language with built-in facilities to easily express and expose concurrency - being such a language. We introduce GoCxx, a tool leveraging gcc-xml's output to automate the tedious work of creating Go wrappers for foreign languages, a critical task for any language wishing to leverage legacy and field-tested code. We will conclude with the first results of applying GoCxx to real C++ code.

  11. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter included describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this study formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  12. Effects of personal experiences on the interpretation of the meaning of colours used in the displays and controls in electric control panels.

    PubMed

    Lee, Inseok; Hwang, Won-Gue

    2015-01-01

    A survey was conducted to examine how personal experiences affect the interpretation of the meaning of display and control colours on electric control panels (ECPs). In Korea, the red light on ECPs represents a normal state of operation, while the green light represents a stopped state of operation; this appears to contradict the general stereotypes surrounding these colours. The survey results indicated that the participants who had experience in using ECPs interpreted the colour meaning differently from the other participant group. More than half of the experienced participants regarded the coloured displays and controls as they were designed, while most participants in the other group appeared to interpret the colours in accordance with the stereotypes. It is presumed that accidents related to human errors can occur when inexperienced people use the ECPs, which are easily accessible in many buildings. Practitioner Summary: A survey was conducted to investigate how personal experiences affect the interpretation of the functional meanings of coloured lights on electrical control panels. It was found that the interpretation varies according to personal experiences, which can induce accidents related to human errors while operating electrical equipment.

  13. Hunting statistics: what data for what use? An account of an international workshop

    USGS Publications Warehouse

    Nichols, J.D.; Lancia, R.A.; Lebreton, J.D.

    2001-01-01

    Hunting interacts with the underlying dynamics of game species in several different ways and is, at the same time, a source of valuable information not easily obtained from populations that are not subjected to hunting. Specific questions, including the sustainability of hunting activities, can be addressed using hunting statistics. Such investigations will frequently require that hunting statistics be combined with data from other sources of population-level information. Such reflections served as a basis for the meeting, "Hunting Statistics: What Data for What Use?", held on January 15-18, 2001 in Saint-Benoist, France. We review here the 20 talks held during the workshop and the contribution of hunting statistics to our knowledge of the population dynamics of game species. Three specific topics (adaptive management, catch-effort models, and dynamics of exploited populations) were highlighted as important themes and are more extensively presented as boxes.

  14. Quality Metrics Of Digitally Derived Imagery And Their Relation To Interpreter Performance

    NASA Astrophysics Data System (ADS)

    Burke, James J.; Snyder, Harry L.

    1981-12-01

    Two hundred-fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photo-interpreters. Each image is 86 mm square and represents 4096² 8-bit pixels. In the "interpretation" experiment, each photo-interpreter (judge) spent approximately two days extracting Essential Elements of Information (EEI's) from one degraded version of each scene at a constant blur level (FWHM = 40, 84 or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not significant (p = 0.146) in the interpretation experiment, that of noise was significant (p = 0.005), and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  15. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.

  16. Making On-Line Science Course Materials Easily Translatable and Accessible Worldwide: Challenges and Solutions

    ERIC Educational Resources Information Center

    Adams, Wendy K.; Alhadlaq, Hisham; Malley, Christopher V.; Perkins, Katherine K.; Olson, Jonathan; Alshaya, Fahad; Alabdulkareem, Saleh; Wieman, Carl E.

    2012-01-01

    The PhET Interactive Simulations Project partnered with the Excellence Research Center of Science and Mathematics Education at King Saud University with the joint goal of making simulations useable worldwide. One of the main challenges of this partnership is to make PhET simulations and the website easily translatable into any language. The PhET…

  17. The crossing statistic: dealing with unknown errors in the dispersion of Type Ia supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Clifton, Timothy; Ferreira, Pedro, E-mail: arman@ewha.ac.kr, E-mail: tclifton@astro.ox.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk

    2011-08-01

    We propose a new statistic that has been designed to be used in situations where the intrinsic dispersion of a data set is not well known: The Crossing Statistic. This statistic is in general less sensitive than χ² to the intrinsic dispersion of the data, and hence allows us to make progress in distinguishing between different models using goodness of fit to the data even when the errors involved are poorly understood. The proposed statistic makes use of the shape and trends of a model's predictions in a quantifiable manner. It is applicable to a variety of circumstances, although we consider it to be especially well suited to the task of distinguishing between different cosmological models using type Ia supernovae. We show that this statistic can easily distinguish between different models in cases where the χ² statistic fails. We also show that the last mode of the Crossing Statistic is identical to χ², so that it can be considered as a generalization of χ².

  18. Statistical models of lunar rocks and regolith

    NASA Technical Reports Server (NTRS)

    Marcus, A. H.

    1973-01-01

    The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.

  19. XGR software for enhanced interpretation of genomic summary data, illustrated by application to immunological traits.

    PubMed

    Fang, Hai; Knezevic, Bogdan; Burnham, Katie L; Knight, Julian C

    2016-12-13

    Biological interpretation of genomic summary data such as those resulting from genome-wide association studies (GWAS) and expression quantitative trait loci (eQTL) studies is one of the major bottlenecks in medical genomics research, calling for efficient and integrative tools to resolve this problem. We introduce eXploring Genomic Relations (XGR), an open source tool designed for enhanced interpretation of genomic summary data enabling downstream knowledge discovery. Targeting users of varying computational skills, XGR utilises prior biological knowledge and relationships in a highly integrated but easily accessible way to make user-input genomic summary datasets more interpretable. We show how by incorporating ontology, annotation, and systems biology network-driven approaches, XGR generates more informative results than conventional analyses. We apply XGR to GWAS and eQTL summary data to explore the genomic landscape of the activated innate immune response and common immunological diseases. We provide genomic evidence for a disease taxonomy supporting the concept of a disease spectrum from autoimmune to autoinflammatory disorders. We also show how XGR can define SNP-modulated gene networks and pathways that are shared and distinct between diseases, how it achieves functional, phenotypic and epigenomic annotations of genes and variants, and how it enables exploring annotation-based relationships between genetic variants. XGR provides a single integrated solution to enhance interpretation of genomic summary data for downstream biological discovery. XGR is released as both an R package and a web-app, freely available at http://galahad.well.ox.ac.uk/XGR .

  20. Interpretation of FTIR spectra of polymers and Raman spectra of car paints by means of likelihood ratio approach supported by wavelet transform for reducing data dimensionality.

    PubMed

    Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz

    2015-05-01

    The problem of determining the common provenance of samples within an infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints, was investigated. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can easily be proposed for databases described by a few variables, the research focused on the problem of reducing the dimensionality of spectra characterised by more than a thousand variables. The objective of the studies was to combine chemometric tools that deal easily with multidimensionality with the LR approach. The final variables used for constructing the LR models were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique, supported by methods for variance analysis, and corresponded to chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and by using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from the DWT preserve the signal's characteristics, giving a sparse representation of the original signal that keeps its shape and relevant chemical information.
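
    A minimal sketch of the two-stage pipeline: compress each spectrum to its coarse DWT approximation coefficients (PyWavelets), then score a questioned pair with a univariate LR built from normal densities of within-source and between-source feature differences. Everything below (signal shapes, the chosen coefficient, the densities) is hypothetical, and the variance-analysis and empirical-cross-entropy steps are omitted:

        import numpy as np
        import pywt
        from scipy.stats import norm

        rng = np.random.default_rng(11)
        grid = np.linspace(0, 1, 1024)

        def spectrum(peak):  # toy stand-in for an FTIR/Raman measurement
            return np.exp(-((grid - peak) / 0.02) ** 2) + rng.normal(0, 0.01, grid.size)

        def feat(s):
            # Coarse approximation coefficients: a sparse representation that
            # keeps the peak shape; index 12 is an arbitrary illustrative choice.
            return pywt.wavedec(s, "db4", level=5)[0][12]

        same = np.array([feat(spectrum(0.40)) - feat(spectrum(0.40)) for _ in range(200)])
        diff = np.array([feat(spectrum(0.40)) - feat(spectrum(0.55)) for _ in range(200)])

        # Univariate LR for a new questioned pair: density of the observed
        # feature difference under "same source" vs "different sources".
        delta = feat(spectrum(0.40)) - feat(spectrum(0.40))
        log_lr = (norm.logpdf(delta, same.mean(), same.std())
                  - norm.logpdf(delta, diff.mean(), diff.std()))
        print(f"log LR = {log_lr:.1f}  (> 0 supports common provenance)")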

  1. Techniques in teaching statistics : linking research production and research use.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Moyano, I.; Smith, A. (Univ. of Massachusetts at Boston)

    In the spirit of closing the 'research-practice gap,' the authors extend evidence-based principles to statistics instruction in social science graduate education. The authors employ a Delphi method to survey experienced statistics instructors to identify teaching techniques to overcome the challenges inherent in teaching statistics to students enrolled in practitioner-oriented master's degree programs. Among the teaching techniques identified as essential are using real-life examples, requiring data collection exercises, and emphasizing interpretation rather than results. Building on existing research, preliminary interviews, and the findings from the study, the authors develop a model describing antecedents to the strength of the link between research and practice.

  2. US Geological Survey nutrient preservation experiment : experimental design, statistical analysis, and interpretation of analytical results

    USGS Publications Warehouse

    Patton, Charles J.; Gilroy, Edward J.

    1999-01-01

    Data on which this report is based, including nutrient concentrations in synthetic reference samples determined concurrently with those in real samples, are extensive (greater than 20,000 determinations) and have been published separately. In addition to confirming the well-documented instability of nitrite in acidified samples, this study also demonstrates that when biota are removed from samples at collection sites by 0.45-micrometer membrane filtration, subsequent preservation with sulfuric acid or mercury (II) provides no statistically significant improvement in nutrient concentration stability during storage at 4 degrees Celsius for 30 days. Biocide preservation had no statistically significant effect on the 30-day stability of phosphorus concentrations in whole-water splits from any of the 15 stations, but did stabilize Kjeldahl nitrogen concentrations in whole-water splits from three data-collection stations where ammonium accounted for at least half of the measured Kjeldahl nitrogen.

  3. Enhancing the Development of Statistical Literacy through the Robot Bioglyph

    ERIC Educational Resources Information Center

    Bragg, Leicha A.; Koch, Jessica; Willis, Ashley

    2017-01-01

    One way to heighten students' interest in the classroom is by personalising tasks. Through designing Robot Bioglyphs students are able to explore personalised data through a creative and engaging process. By understanding, producing and interpreting data, students are able to develop their statistical literacy, which is an essential skill in…

  4. Assistive Technologies for Second-Year Statistics Students Who Are Blind

    ERIC Educational Resources Information Center

    Erhardt, Robert J.; Shuman, Michael P.

    2015-01-01

    At Wake Forest University, a student who is blind enrolled in a second course in statistics. The course covered simple and multiple regression, model diagnostics, model selection, data visualization, and elementary logistic regression. These topics required that the student both interpret and produce three sets of materials: mathematical writing,…

  5. Diagnostic Concordance Among Pathologists Interpreting Breast Biopsy Specimens

    PubMed Central

    Elmore, Joann G.; Longton, Gary M.; Carney, Patricia A.; Geller, Berta M.; Onega, Tracy; Tosteson, Anna N. A.; Nelson, Heidi D.; Pepe, Margaret S.; Allison, Kimberly H.; Schnitt, Stuart J.; O’Malley, Frances P.; Weaver, Donald L.

    2015-01-01

    [The abstract begins mid-table; the recoverable row reads: invasive carcinoma, 663 cases, concordance 96% (95% CI, 94–97), disagreement 4% (95% CI, 3–6).] Disagreement with the reference diagnosis was statistically significantly higher among biopsies from women with higher (n = 122) vs lower (n = 118) breast density on prior mammograms (overall concordance rate, 73% [95% CI, 71%–75%] for higher vs 77% [95% CI, 75%–80%] for lower, P < .001), and among pathologists who interpreted lower weekly case volumes (P < .001) or worked in smaller practices (P = .034) or nonacademic settings (P = .007). CONCLUSIONS AND RELEVANCE In this study of pathologists, in which diagnostic interpretation was based on a single breast biopsy slide, overall agreement between the individual pathologists’ interpretations and the expert consensus–derived reference diagnoses was 75.3%, with the highest level of concordance for invasive carcinoma and lower levels of concordance for DCIS and atypia. Further research is needed to understand the relationship of these findings with patient management. PMID:25781441

  6. Effects of Matching Multiple Memory Strategies with Computer-Assisted Instruction on Students' Statistics Learning Achievement

    ERIC Educational Resources Information Center

    Liao, Ying; Lin, Wen-He

    2016-01-01

    In an era of digitalization, numbers are the principal medium for conveying information, and statistics is the primary instrument for interpreting and analyzing numerical information. For this reason, the cultivation of fundamental statistical literacy should be a key goal in the learning area of mathematics at the stage of compulsory education.…

  7. A system of registration and statistics.

    PubMed

    Blayo, C

    1993-06-01

    In 1971, WHO recommended obligatory reporting to countries preparing to legalize induced abortion; however, there is no registration of abortions in Austria, Greece, Luxembourg, or Portugal, nor in Northern Ireland, Ireland, and Malta, where abortion is prohibited, or in Switzerland, where it is limited. Albania is preparing to institute such a provision. Registration is not always complete in Germany, France, Italy, Poland, and Spain, and in the republics of the former USSR, particularly Lithuania. The data gathered are often further impoverished at the stage of publication of the statistics. Certain estimations, or even results of surveys, make up for these shortcomings. A retrospective survey of a sample representing all women aged 15 years or older would allow the reconstruction of statistics on abortions in past years. Systematic registration must be accompanied by the publication of a statistical record. Sterilization appears to be spreading in Europe, but it is only very rarely registered. The proportion of couples sterilized is sometimes obtained by surveys, but there is hardly any information on the characteristics of this group. On the other hand, the practice of contraception can be easily assessed, as in the majority of countries contraceptives are dispensed through pharmacies, public family planning centers, and private practitioners. Family planning centers sometimes are sources of statistical data. In some countries producers' associations make statistics available on the sale of contraceptives. Detailed surveys facilitate the characterization of the users and reveal the methods they employ. Many countries carried out such surveys at the end of the 1970s under the aegis of world fertility surveys. It is urgent to invest in data collection suitable for learning the proportion of women who utilize each method of contraception in all the countries of Europe.

  8. A bird's eye view: the cognitive strategies of experts interpreting seismic profiles

    NASA Astrophysics Data System (ADS)

    Bond, C. E.; Butler, R.

    2012-12-01

    Geoscience is perhaps unique in its reliance on incomplete datasets and on building knowledge from their interpretation. This reliance on interpretation is fundamental at all levels, from the creation of a geological map to the interpretation of remotely sensed data. To teach and better understand the uncertainties in dealing with incomplete data, we need to understand the strategies that make individual practitioners effective interpreters. The nature of interpretation is such that the interpreter must use their cognitive abilities in analyzing the data to propose a sensible solution that is consistent not only with the original data but also with other knowledge and understanding. In a series of experiments, Bond et al. (2007, 2008, 2011, 2012) investigated the strategies and pitfalls of expert and non-expert interpretation of seismic images. These studies used large numbers of participants to provide a statistically sound basis for analysis of the results. The outcome of these experiments showed that techniques and strategies are more important than expert knowledge per se in developing successful interpretations; experts are successful because of their application of these techniques. In a new set of experiments we have focused on a small number of experts to determine how they use their cognitive and reasoning skills in the interpretation of 2D seismic profiles. Live video and practitioner commentary were used to track the evolving interpretation and to gain insight into their decision processes. The outputs of the study allow us to create an educational resource of expert interpretation through online video footage and commentary with associated further interpretation and analysis of the techniques and strategies employed. This resource will be of use to undergraduate, post-graduate, industry and academic professionals seeking to improve their seismic interpretation skills, develop reasoning strategies for

  9. Easily installable behavioral monitoring system with electric field sensor.

    PubMed

    Tsukamoto, Sosuke; Machida, Yuichiro; Kameda, Noriyuki; Hoshino, Hiroshi; Tamura, Toshiyo

    2007-01-01

    This paper describes a wireless behavioral monitoring system equipped with an electric field sensor. The sensor unit was designed to obtain information regarding the usage of home electric appliances such as the television, microwave oven, coffee maker, etc. by measuring the electric field surrounding them. It is assumed that these usage statistics could provide information regarding the indoor behavior of a subject. Since the sensor can be used by simply attaching it to an appliance and does not require any wiring for its installation, this system can be temporarily installed in any ordinary house. A simple interface for selecting the threshold value of appliances' power on/off states was introduced. The experimental results reveal that the proposed system can be installed by individuals in their residences in a short time and the usage statistics of home appliances can be gathered.
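
    As a rough illustration of the detection logic described above, the sketch below converts a stream of field-sensor readings into on/off usage statistics with a single user-selected threshold. It is a minimal stand-in, not the authors' implementation; the function names, the sampling model and the simple threshold rule are all assumptions.

        import numpy as np

        def usage_stats(readings, threshold, sample_period_s=1.0):
            """Summarize appliance usage from regularly sampled field readings."""
            on = np.asarray(readings) > threshold        # power state per sample
            switches = np.diff(on.astype(int))           # +1 = turned on, -1 = off
            return {
                "total_on_time_s": float(on.sum()) * sample_period_s,
                "activations": int((switches == 1).sum()),
            }

        # Example: an appliance switched on twice in a short observation window
        print(usage_stats([0.1, 0.2, 3.1, 3.0, 0.1, 2.9, 3.2, 0.2], threshold=1.0))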

  10. Interpretation biases in paranoia.

    PubMed

    Savulich, George; Freeman, Daniel; Shergill, Sukhi; Yiend, Jenny

    2015-01-01

    Information in the environment is frequently ambiguous in meaning. Emotional ambiguity, such as the stare of a stranger, or the scream of a child, encompasses possible good or bad emotional consequences. Those with elevated vulnerability to affective disorders tend to interpret such material more negatively than those without, a phenomenon known as "negative interpretation bias." In this study we examined the relationship between vulnerability to psychosis, measured by trait paranoia, and interpretation bias. One set of material permitted broadly positive/negative (valenced) interpretations, while another allowed more or less paranoid interpretations, allowing us to also investigate the content specificity of interpretation biases associated with paranoia. Regression analyses (n=70) revealed that trait paranoia, trait anxiety, and cognitive inflexibility predicted paranoid interpretation bias, whereas trait anxiety and cognitive inflexibility predicted negative interpretation bias. In a group comparison those with high levels of trait paranoia were negatively biased in their interpretations of ambiguous information relative to those with low trait paranoia, and this effect was most pronounced for material directly related to paranoid concerns. Together these data suggest that a negative interpretation bias occurs in those with elevated vulnerability to paranoia, and that this bias may be strongest for material matching paranoid beliefs. We conclude that content-specific biases may be important in the cause and maintenance of paranoid symptoms. Copyright © 2014. Published by Elsevier Ltd.

  11. A Practical Guide to Check the Consistency of Item Response Patterns in Clinical Research Through Person-Fit Statistics: Examples and a Computer Program.

    PubMed

    Meijer, Rob R; Niessen, A Susan M; Tendeiro, Jorge N

    2016-02-01

    Although there are many studies devoted to person-fit statistics for detecting inconsistent item score patterns, most are difficult to understand for nonspecialists. The aim of this tutorial is to explain the principles of these statistics for researchers and clinicians who are interested in applying them. In particular, we first explain how invalid test scores can be detected using person-fit statistics; second, we provide the reader with practical examples of existing studies that used person-fit statistics to detect and interpret inconsistent item score patterns; and third, we discuss a new R package that can be used to identify and interpret inconsistent score patterns. © The Author(s) 2015.
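
    The tutorial's R package is not reproduced here, but the flavor of a person-fit statistic is easy to convey. The sketch below computes the standardized log-likelihood statistic lz, one widely used person-fit index, given a response pattern and model-implied success probabilities from a fitted IRT model; the names and data are illustrative.

        import numpy as np

        def lz_statistic(responses, p_correct):
            """Standardized log-likelihood person-fit statistic (lz).

            Large negative values flag item score patterns that are unlikely
            under the fitted IRT model (e.g., failing easy items while
            passing hard ones).
            """
            u = np.asarray(responses, dtype=float)
            p = np.asarray(p_correct, dtype=float)
            l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
            mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
            var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
            return (l0 - mean) / np.sqrt(var)

        p = np.array([0.9, 0.8, 0.7, 0.4, 0.2])    # easy-to-hard items
        print(lz_statistic([0, 0, 1, 1, 1], p))     # inconsistent: strongly negative
        print(lz_statistic([1, 1, 1, 0, 0], p))     # consistent: near or above zero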

  12. Using holistic interpretive synthesis to create practice-relevant guidance for person-centred fundamental care delivered by nurses.

    PubMed

    Feo, Rebecca; Conroy, Tiffany; Marshall, Rhianon J; Rasmussen, Philippa; Wiechula, Richard; Kitson, Alison L

    2017-04-01

    Nursing policy and healthcare reform are focusing on two, interconnected areas: person-centred care and fundamental care. Each initiative emphasises a positive nurse-patient relationship. For these initiatives to work, nurses require guidance for how they can best develop and maintain relationships with their patients in practice. Although empirical evidence on the nurse-patient relationship is increasing, findings derived from this research are not readily or easily transferable to the complexities and diversities of nursing practice. This study describes a novel methodological approach, called holistic interpretive synthesis (HIS), for interpreting empirical research findings to create practice-relevant recommendations for nurses. Using HIS, umbrella review findings on the nurse-patient relationship are interpreted through the lens of the Fundamentals of Care Framework. The recommendations for the nurse-patient relationship created through this approach can be used by nurses to establish, maintain and evaluate therapeutic relationships with patients to deliver person-centred fundamental care. Future research should evaluate the validity and impact of these recommendations and test the feasibility of using HIS for other areas of nursing practice and further refine the approach. © 2016 John Wiley & Sons Ltd.

  13. Do doctors need statistics? Doctors' use of and attitudes to probability and statistics.

    PubMed

    Swift, Louise; Miles, Susan; Price, Gill M; Shepstone, Lee; Leinster, Sam J

    2009-07-10

    There is little published evidence on what doctors do in their work that requires probability and statistics, yet the General Medical Council (GMC) requires new doctors to have these skills. This study investigated doctors' use of and attitudes to probability and statistics with a view to informing undergraduate teaching. An email questionnaire was sent to 473 clinicians with an affiliation to the University of East Anglia's Medical School. Of 130 respondents, approximately 90 per cent of doctors who performed each of the following activities found probability and statistics useful for that activity: accessing clinical guidelines and evidence summaries, explaining levels of risk to patients, assessing medical marketing and advertising material, interpreting the results of a screening test, reading research publications for general professional interest, and using research publications to explore non-standard treatment and management options. Seventy-nine per cent (103/130, 95 per cent CI 71 per cent, 86 per cent) of participants considered probability and statistics important in their work. Sixty-three per cent (78/124, 95 per cent CI 54 per cent, 71 per cent) said that there were activities that they could do better or start doing if they had an improved understanding of these areas, and 74 of these participants elaborated on this. Themes highlighted by participants included: being better able to critically evaluate other people's research; becoming more research-active; having a better understanding of risk; and being better able to explain things to, or teach, other people. Our results can be used to inform how probability and statistics should be taught to medical undergraduates and should persuade today's medical students of the subjects' relevance to their future careers. Copyright 2009 John Wiley & Sons, Ltd.

  14. Impact of clinical history on chest radiograph interpretation.

    PubMed

    Test, Matthew; Shah, Samir S; Monuteaux, Michael; Ambroggio, Lilliam; Lee, Edward Y; Markowitz, Richard I; Bixby, Sarah; Diperna, Stephanie; Servaes, Sabah; Hellinger, Jeffrey C; Neuman, Mark I

    2013-07-01

    The inclusion of clinical information may have unrecognized influence on the interpretation of diagnostic testing. The objective of the study was to determine the impact of clinical history on chest radiograph interpretation in the diagnosis of pneumonia. Design: prospective case-based study. Setting: Children's Hospital of Philadelphia and Boston Children's Hospital. Participants: six board-certified radiologists. The radiologists interpreted 110 radiographs of children evaluated for suspicion of pneumonia. Clinical information was withheld during the first interpretation; after 6 months the radiographs were reviewed with clinical information. Radiologists reported on pneumonia indicators described by the World Health Organization (i.e., any infiltrate, alveolar infiltrate, interstitial infiltrate, air bronchograms, hilar adenopathy, pleural effusion). Intra- and inter-rater reliability were assessed using the kappa statistic. The addition of clinical history did not have a substantial impact on inter-rater reliability in the identification of any infiltrate, alveolar infiltrate, interstitial infiltrate, pleural effusion, or hilar adenopathy. Inter-rater reliability in the identification of air bronchograms improved from fair (k = 0.32) to moderate (k = 0.53). Intra-rater reliability for the identification of alveolar infiltrate remained substantial to almost perfect for all 6 raters with and without clinical information. One rater had a decrease in intra-rater reliability from almost perfect (k = 1.0) to fair (k = 0.21) in the identification of interstitial infiltrate with the addition of clinical history. Alveolar infiltrate and pleural effusion are findings with high intra- and inter-rater reliability in the diagnosis of bacterial pneumonia. The addition of clinical information did not have a substantial impact on the reliability of these findings. © 2012 Society of Hospital Medicine.
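
    The kappa statistic used throughout this abstract corrects raw agreement for the agreement expected by chance. A minimal sketch for two raters follows (the data are invented; a real analysis would also report confidence intervals):

        import numpy as np

        def cohens_kappa(rater1, rater2):
            """Cohen's kappa: chance-corrected agreement between two raters."""
            r1, r2 = np.asarray(rater1), np.asarray(rater2)
            p_o = np.mean(r1 == r2)                          # observed agreement
            p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)    # chance agreement
                      for c in np.union1d(r1, r2))
            return (p_o - p_e) / (1 - p_e)

        # Two raters scoring 10 radiographs for "any infiltrate" (1 = present)
        print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1, 0, 0],
                           [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]))  # 0.6 -> moderate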

  15. Data-driven inference for the spatial scan statistic.

    PubMed

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. The statistical significance of detected clusters is tested while adjusting for the multiple testing inherent in such a procedure. However, as shown in this work, this adjustment is not made evenly across all possible cluster sizes. A modification to the usual inference test of the spatial scan statistic is proposed, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic follows, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed case map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, since the correctness of the decision based on this inference is then in doubt. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
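
    The modified inference question can be phrased compactly in code. The sketch below assumes a routine most_likely_cluster(case_map) returning the log-likelihood ratio and size of the most likely cluster, plus a null-map simulator; both are placeholders for whatever implementation of Kulldorff's statistic is in use.

        import numpy as np

        def scan_p_values(observed_map, simulate_null_map, most_likely_cluster,
                          n_sims=999):
            """Usual and size-conditioned Monte Carlo p-values for the scan statistic.

            The usual p-value ranks the observed LLR against all null simulations;
            the size-conditioned variant compares it only against simulations whose
            most likely cluster has the same size k as the observed one.
            """
            obs_llr, obs_k = most_likely_cluster(observed_map)
            null = [most_likely_cluster(simulate_null_map()) for _ in range(n_sims)]

            all_llrs = np.array([llr for llr, _ in null])
            same_k = np.array([llr for llr, k in null if k == obs_k])

            p_usual = (1 + np.sum(all_llrs >= obs_llr)) / (n_sims + 1)
            p_cond = (1 + np.sum(same_k >= obs_llr)) / (len(same_k) + 1)
            return p_usual, p_cond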

  16. ORAL INTERPRETATION.

    ERIC Educational Resources Information Center

    CAMPBELL, PAUL N.

    The basic premise of this book is that learning to read orally is of fundamental importance to those who would fully appreciate or respond to literature. Because readers must interpret literature always for themselves and often for an audience, three aspects of oral interpretation are explored--(1) the choice of materials, which requires an…

  17. Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.

    PubMed

    Chalmers, R Philip

    2018-06-01

    This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.

  18. Risk patterns and correlated brain activities. Multidimensional statistical analysis of FMRI data in economic decision making study.

    PubMed

    van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K

    2014-07-01

    Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk and which neural processes underlie (risky) investment decisions are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (in NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Härdle, and Borak (in Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common to all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.

  19. Distributed data collection for a database of radiological image interpretations

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute of Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering 5- and 10-megabyte x-ray images across the Internet to Sun workstations equipped with X Window based 2048 × 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high-resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  20. Advances in Statistical Methods for Substance Abuse Prevention Research

    PubMed Central

    MacKinnon, David P.; Lockwood, Chondra M.

    2010-01-01

    The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
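
    Among the methods mentioned, mediation analysis is easy to sketch concretely. The snippet below implements the classic product-of-coefficients estimate with a first-order (Sobel) standard error; it is a textbook illustration with invented names, not the authors' software.

        import numpy as np

        def ols(X, y):
            """OLS coefficients and their standard errors."""
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = resid @ resid / (len(y) - X.shape[1])
            se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
            return beta, se

        def sobel(x, m, y):
            """Indirect effect a*b of x on y through mediator m, with z statistic."""
            x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
            const = np.ones_like(x)
            (_, a), (_, sa) = ols(np.column_stack([const, x]), m)           # a-path
            (_, _, b), (_, _, sb) = ols(np.column_stack([const, x, m]), y)  # b-path
            ab = a * b
            return ab, ab / np.sqrt(a**2 * sb**2 + b**2 * sa**2)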

  1. Surface temperature statistics over Los Angeles - The influence of land use

    NASA Technical Reports Server (NTRS)

    Dousset, Benedicte

    1991-01-01

    Surface temperature statistics from 84 NOAA AVHRR (Advanced Very High Resolution Radiometer) satellite images of the Los Angeles basin are interpreted as functions of the corresponding urban land-cover classified from a multispectral SPOT image. Urban heat islands observed in the temperature statistics correlate well with the distribution of industrial and fully built areas. Small cool islands coincide with highly watered parks and golf courses. There is a significant negative correlation between the afternoon surface temperature and a vegetation index computed from the SPOT image.

  2. The use of higher-order statistics in rapid object categorization in natural scenes.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-02-04

    We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent roles of shape and statistical information in ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Textures synthesized by their algorithm have the same higher-order statistics as the originals, while the global shapes are destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. The results did not show contributions clearly biased by eccentricity: statistical information made a robust contribution not only in peripheral but also in central vision, and for shape the results supported a contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics: they are available for only a limited time, they reflect the presence or absence of animals even when shape is removed, and they predict how easily humans detect animals in original images. Our data suggest that, under the time constraints of categorical processing, higher-order statistics underlie our strong performance in rapid categorization, irrespective of eccentricity. © 2015 ARVO.

  3. Neutral vs positive oral contrast in diagnosing acute appendicitis with contrast-enhanced CT: sensitivity, specificity, reader confidence and interpretation time

    PubMed Central

    Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F

    2011-01-01

    Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast, including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1 = poor to 4 = excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents in sensitivity (p-values > 0.6), specificity (p-values > 0.08), the area under the ROC curve (range 0.95–0.99) or average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365

  4. Reporting Point and Interval Estimates of Effect-Size for Planned Contrasts: Fixed within Effect Analyses of Variance

    ERIC Educational Resources Information Center

    Robey, Randall R.

    2004-01-01

    The purpose of this tutorial is threefold: (a) review the state of statistical science regarding effect-sizes, (b) illustrate the importance of effect-sizes for interpreting findings in all forms of research and particularly for results of clinical-outcome research, and (c) demonstrate just how easily a criterion on reporting effect-sizes in…

  5. Note on the interpretation of interactions in comparative research.

    PubMed

    Stanovich, K E

    1977-01-01

    In comparative research, attention often centers on the existence of an interaction between subject population and an experimental manipulation. Several recent investigators have discussed problems in the interpretation of such interactions. The present paper concerns one particular conceptual difficulty that, although common in research of this type, has received little attention: dependent variables that are only ordinally related to the construct they are meant to measure are subject to transformations that may create or eliminate statistical interactions.
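
    A minimal numerical illustration of the point, with invented cell means for a 2 (group) x 2 (condition) design: a monotone, order-preserving transformation of the dependent variable can make a statistical interaction vanish (or appear), so ordinal-scale measures cannot settle claims about interactions.

        import numpy as np

        raw = np.array([[1.0, 10.0],
                        [10.0, 100.0]])   # cell means on the raw scale

        def interaction(means):
            """Difference of differences; zero indicates no interaction."""
            return (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])

        print(interaction(raw))            # 81.0 -> apparent interaction
        print(interaction(np.log10(raw)))  # 0.0  -> gone after a monotone transform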

  6. Statistical classifiers on multifractal parameters for optical diagnosis of cervical cancer

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Kumar, Rajeev; Krishnamoorthy, Vigneshram; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-06-01

    An augmented set of multifractal parameters with physical interpretations has been proposed to quantify the varying distribution and shape of the multifractal spectrum. A statistical classifier with an accuracy of 84.17% validates the adequacy of multi-feature MFDFA characterization of elastic scattering spectroscopy for the optical diagnosis of cancer.

  7. An Online Course of Business Statistics: The Proportion of Successful Students

    ERIC Educational Resources Information Center

    Pena-Sanchez, Rolando

    2009-01-01

    This article describes students' academic progress in an online business statistics course through interactive software assignments and diverse educational homework, which help these students build their own e-learning through basic competences, i.e., interpreting results and solving problems. Cross-tables were built for the categorical…

  8. Pitfalls in statistical landslide susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods that relate historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" inherent in machine learning methods in order to gain further explanatory insights into the preparatory factors that control landslides. Sampling of training data should be guided by hypotheses about the processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered at adequate spatial scales. This set should be checked for multicollinearity in order to facilitate the interpretation of model response curves. Model quality assessment measures how well a model reproduces independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independency in the training samples or a possible

  9. Assessing the statistical significance of the achieved classification error of classifiers constructed using serum peptide profiles, and a prescription for random sampling repeated studies for massive high-throughput genomic and proteomic studies.

    PubMed

    Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos

    2005-01-01

    Peptide profiles generated using SELDI/MALDI time-of-flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns about the reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the achieved classification error (ACE) on the profile data. The framework compares the performance statistic of the classifier on the true data samples and checks whether it is consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess the significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
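
    The permutation logic behind PACE is simple to sketch. The toy version below uses an in-sample nearest-centroid rule as a stand-in classifier; the published framework couples the same permutation scheme with proper error estimation for whatever model is being validated, so treat the names and details here as assumptions.

        import numpy as np

        def ace(X, y):
            """Achieved classification error of a nearest-centroid rule."""
            y = np.asarray(y)
            c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
            pred = (np.linalg.norm(X - c1, axis=1) <
                    np.linalg.norm(X - c0, axis=1)).astype(int)
            return float(np.mean(pred != y))

        def pace_p_value(X, y, n_perm=999, seed=0):
            """Fraction of label permutations with error <= the observed ACE."""
            rng = np.random.default_rng(seed)
            observed = ace(X, y)
            hits = sum(ace(X, rng.permutation(y)) <= observed
                       for _ in range(n_perm))
            return (1 + hits) / (n_perm + 1)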

  10. Application of multivariable statistical techniques in plant-wide WWTP control strategies analysis.

    PubMed

    Flores, X; Comas, J; Roda, I R; Jiménez, L; Gernaey, K V

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques make it possible (i) to determine natural groups or clusters of control strategies with similar behaviour, (ii) to find and interpret hidden, complex and causal relationships in the data set, and (iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets and allows an improved use of information for effective evaluation of control strategies.
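
    Of the three techniques, PCA is the most compact to illustrate: projecting the evaluation matrix onto its leading components often exposes the clusters of similar control strategies that CA then formalizes. A minimal SVD-based sketch with an invented evaluation matrix follows.

        import numpy as np

        def pca_scores(X, n_components=2):
            """Project rows of X onto the leading principal components."""
            Xc = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            explained = s[:n_components] ** 2 / np.sum(s ** 2)
            return Xc @ Vt[:n_components].T, explained

        # Rows: control strategies; columns: evaluation criteria
        X = np.array([[1.0, 0.9, 3.2], [1.1, 1.0, 3.0],
                      [2.5, 2.2, 1.1], [2.4, 2.0, 1.3]])
        scores, var = pca_scores(X)
        print(scores)   # two groups of strategies separate on the first component
        print(var)      # share of variance captured by each component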

  11. When Statistical Literacy Really Matters: Understanding Published Information about the HIV/AIDS Epidemic in South Africa

    ERIC Educational Resources Information Center

    Hobden, Sally

    2014-01-01

    Information on the HIV/AIDS epidemic in Southern Africa is often interpreted through a veil of secrecy and shame and, I argue, with flawed understanding of basic statistics. This research determined the levels of statistical literacy evident in 316 future Mathematical Literacy teachers' explanations of the median in the context of HIV/AIDS…

  12. Line identification studies using traditional techniques and wavelength coincidence statistics

    NASA Technical Reports Server (NTRS)

    Cowley, Charles R.; Adelman, Saul J.

    1990-01-01

    Traditional line identification techniques result in the assignment of individual lines to an atomic or ionic species. These methods may be supplemented by wavelength coincidence statistics (WCS). The strengths and weaknesses of these methods are discussed using spectra of a number of normal and peculiar B and A stars that have been studied independently by both methods. The present results support the overall findings of some earlier studies. WCS would be most useful in a first survey, before traditional methods have been applied. WCS can quickly make a global search for all species and in this way may enable identification of an unexpected spectrum that could easily be omitted entirely from a traditional study. This is illustrated by O I. WCS is subject to the well-known weaknesses of any statistical technique; for example, a predictable number of spurious results is to be expected. The dangers of small-number statistics are illustrated. WCS is at its best relative to traditional methods in finding a line-rich atomic species that is only weakly present in a complicated stellar spectrum.
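
    The core of WCS is a coincidence count compared against chance. The sketch below counts stellar lines falling within a tolerance of any laboratory line of a candidate species and estimates a Monte Carlo p-value by redrawing line positions uniformly over the observed window; the published null models are more careful, so this is only a schematic.

        import numpy as np

        def coincidence_test(stellar, lab, tol=0.03, n_sims=1000, seed=1):
            """Hit count and chance probability for a candidate species."""
            rng = np.random.default_rng(seed)
            stellar, lab = np.asarray(stellar), np.asarray(lab)

            def hits(lines):
                nearest = np.min(np.abs(lines[:, None] - lab[None, :]), axis=1)
                return int(np.sum(nearest <= tol))

            observed = hits(stellar)
            sims = [hits(rng.uniform(stellar.min(), stellar.max(),
                                     size=len(stellar)))
                    for _ in range(n_sims)]
            p = (1 + np.sum(np.array(sims) >= observed)) / (n_sims + 1)
            return observed, p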

  13. Considerations When Working with Interpreters.

    ERIC Educational Resources Information Center

    Hwa-Froelich, Deborah A.; Westby, Carol E.

    2003-01-01

    This article describes the current training and certification procedures in place for linguistic interpreters, the continuum of interpreter roles, and how interpreters' perspectives may influence the interpretive interaction. The specific skills needed for interpreting in either health care or educational settings are identified. A table compares…

  14. Three-channel false colour AFM images for improved interpretation of complex surfaces: a study of filamentous cyanobacteria.

    PubMed

    Kurk, Toby; Adams, David G; Connell, Simon D; Thomson, Neil H

    2010-05-01

    Imaging signals derived from the atomic force microscope (AFM) are typically presented as separate adjacent images with greyscale or pseudo-colour palettes. We propose that information-rich false-colour composites are a useful means of presenting three-channel AFM image data. This method can aid the interpretation of complex surfaces and facilitate the perception of information that is convoluted across data channels. We illustrate this approach with images of filamentous cyanobacteria imaged in air and under aqueous buffer, using both deflection-modulation (contact) mode and amplitude-modulation (tapping) mode. Topography-dependent contrast in the error and tertiary signals aids the interpretation of the topography signal by contributing additional data, resulting in a more detailed image, and by showing variations in the probe-surface interaction. Moreover, topography-independent contrast and topography-dependent contrast in the tertiary data image (phase or friction) can be distinguished more easily as a consequence of the three dimensional colour-space.

  15. Statistical significance of trace evidence matches using independent physicochemical measurements

    NASA Astrophysics Data System (ADS)

    Almirall, Jose R.; Cole, Michael; Furton, Kenneth G.; Gettinby, George

    1997-02-01

    A statistical approach to the significance of glass evidence is proposed using independent physicochemical measurements and chemometrics. Traditional interpretation of the significance of trace evidence matches or exclusions relies on qualitative descriptors such as 'indistinguishable from,' 'consistent with,' 'similar to,' etc. By performing physical and chemical measurements that are independent of one another, the significance of object exclusions or matches can be evaluated statistically. One of the problems with this approach is that the human brain is excellent at recognizing and classifying patterns and shapes but performs less well when an object is represented by a numerical list of attributes. Chemometrics can be employed to group similar objects using clustering algorithms and to provide statistical significance in a quantitative manner. This approach is enhanced when population databases exist or can be created and the data in question can be evaluated against these databases. Since the selection of the variables used and their pre-processing can greatly influence the outcome, several different methods could be employed in order to obtain a more complete picture of the information contained in the data. Presently, we report on the analysis of glass samples using refractive index measurements and the quantitative analysis of the concentrations of the metals Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti and Zr. The extension of this general approach to fiber and paint comparisons is also discussed. This statistical approach should not replace the current interpretative approaches to trace evidence matches or exclusions but rather yields an additional quantitative measure. The lack of sufficient general population databases containing the needed physicochemical measurements and the potential for confusion arising from statistical analysis currently hamper this approach, and ways of overcoming these obstacles are presented.

  16. Computing the Expected Cost of an Appointment Schedule for Statistically Identical Customers with Probabilistic Service Times

    PubMed Central

    Dietz, Dennis C.

    2014-01-01

    A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
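
    The paper's method is analytic, but the quantity it computes is easy to approximate by simulation, which also makes the cost structure explicit. The sketch below assumes a constant show probability (the paper allows it to vary with time), gamma-distributed service times matching the given mean and standard deviation, and linear waiting and idle costs; all names and cost weights are illustrative.

        import numpy as np

        def expected_cost(appts, show_prob, mean, sd, wait_cost=1.0,
                          idle_cost=0.5, n_sims=20000, seed=2):
            """Monte Carlo estimate of the expected cost of a schedule."""
            rng = np.random.default_rng(seed)
            shape, scale = (mean / sd) ** 2, sd ** 2 / mean
            total = 0.0
            for _ in range(n_sims):
                free, cost = 0.0, 0.0       # time the server next becomes free
                for a in appts:
                    if rng.random() > show_prob:
                        continue            # no-show
                    if free < a:
                        cost += idle_cost * (a - free)
                        free = a
                    else:
                        cost += wait_cost * (free - a)
                    free += rng.gamma(shape, scale)
                total += cost
            return total / n_sims

        # Compare an even 15-minute grid with a slightly front-loaded schedule
        print(expected_cost([0, 15, 30, 45], 0.9, mean=14, sd=6))
        print(expected_cost([0, 12, 26, 42], 0.9, mean=14, sd=6))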

  17. Confidence Intervals for the Between-Study Variance in Random Effects Meta-Analysis Using Generalised Cochran Heterogeneity Statistics

    ERIC Educational Resources Information Center

    Jackson, Dan

    2013-01-01

    Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…

  18. What dementia reveals about proverb interpretation and its neuroanatomical correlates.

    PubMed

    Kaiser, Natalie C; Lee, Grace J; Lu, Po H; Mather, Michelle J; Shapira, Jill; Jimenez, Elvira; Thompson, Paul M; Mendez, Mario F

    2013-08-01

    Neuropsychologists frequently include proverb interpretation as a measure of executive abilities. A concrete interpretation of proverbs, however, may reflect semantic impairments from anterior temporal lobes, rather than executive dysfunction from frontal lobes. The investigation of proverb interpretation among patients with different dementias with varying degrees of temporal and frontal dysfunction may clarify the underlying brain-behavior mechanisms for abstraction from proverbs. We propose that patients with behavioral variant frontotemporal dementia (bvFTD), who are characteristically more impaired on proverb interpretation than those with Alzheimer's disease (AD), are disproportionately impaired because of anterior temporal-mediated semantic deficits. Eleven patients with bvFTD and 10 with AD completed the Delis-Kaplan Executive Function System (D-KEFS) Proverbs Test and a series of neuropsychological measures of executive and semantic functions. The analysis included both raw and age-adjusted normed data for multiple choice responses on the D-KEFS Proverbs Test using independent samples t-tests. Tensor-based morphometry (TBM) applied to 3D T1-weighted MRI scans mapped the association between regional brain volume and proverb performance. Computations of mean Jacobian values within select regions of interest provided a numeric summary of regional volume, and voxel-wise regression yielded 3D statistical maps of the association between tissue volume and proverb scores. The patients with bvFTD were significantly worse than those with AD in proverb interpretation. The worse performance of the bvFTD patients involved a greater number of concrete responses to common, familiar proverbs, but not to uncommon, unfamiliar ones. These concrete responses to common proverbs correlated with semantic measures, whereas concrete responses to uncommon proverbs correlated with executive functions. After controlling for dementia diagnosis, TBM analyses indicated significant

  19. A NOVEL SCORING SYSTEM: PREDICTING SEPTIC SHOCK AT DIAGNOSIS EASILY IN ACUTE COMPLICATED PYELONEPHRITIS PATIENTS.

    PubMed

    Kubota, Masashi; Kanno, Toru; Nishiyama, Ryuichi; Okada, Takashi; Higashi, Yoshihito; Yamada, Hitoshi

    2016-01-01

    (Objectives) Acute complicated pyelonephritis can easily cause sepsis and concomitant shock, making it a potentially lethal disease, yet the predictors of its severity are not well characterized. In this study, we aimed to clarify the clinical risk factors associated with septic shock in patients with acute complicated pyelonephritis. (Materials and methods) From May 2009 to March 2014, 267 patients with acute complicated pyelonephritis were treated at our institution. We investigated patient characteristics associated with septic shock and assessed risk factors in these patients. Using these risk factors, we established a novel scoring system to predict septic shock. (Results) The 267 patients included 145 with ureteral calculi and 75 with stent-related pyelonephritis. Septic shock occurred in 35 patients (13%), and the mortality rate was 0.75%. Multivariate analysis revealed that (P) Performance Status ≥3 (p=0.0014), (U) presence of Ureteral calculi (p=0.043), (S) Sex female (p=0.023), and (H) presence of Hydronephrosis (p=0.039) were independent risk factors for septic shock. The P.U.S.H. score (range 0-4), comprising these 4 factors, was positively correlated with the rate of septic shock (score 0: 0%, 1: 5.3%, 2: 3.4%, 3: 25.0%, 4: 42.3%). Importantly, patients with P.U.S.H. scores of 3-4 were significantly more likely to develop septic shock than those with scores of 0-2 (p=0.00014). (Conclusions) These results suggest that the P.U.S.H. scoring system, based on 4 clinical factors, is useful for predicting septic shock in patients with acute complicated pyelonephritis.

  20. Statistical Compression of Wind Speed Data

    NASA Astrophysics Data System (ADS)

    Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.

    2017-12-01

    In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into 9 regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model that relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.

  1. Innovations in curriculum design: A multi-disciplinary approach to teaching statistics to undergraduate medical students

    PubMed Central

    Freeman, Jenny V; Collier, Steve; Staniforth, David; Smith, Kevin J

    2008-01-01

    Background Statistics is relevant to students and practitioners in medicine and health sciences and is increasingly taught as part of the medical curriculum. However, it is common for students to dislike and under-perform in statistics. We sought to address these issues by redesigning the way that statistics is taught. Methods The project brought together a statistician, clinician and educational experts to re-conceptualize the syllabus, and focused on developing different methods of delivery. New teaching materials, including videos, animations and contextualized workbooks were designed and produced, placing greater emphasis on applying statistics and interpreting data. Results Two cohorts of students were evaluated, one with old style and one with new style teaching. Both were similar with respect to age, gender and previous level of statistics. Students who were taught using the new approach could better define the key concepts of p-value and confidence interval (p < 0.001 for both). They were more likely to regard statistics as integral to medical practice (p = 0.03), and to expect to use it in their medical career (p = 0.003). There was no significant difference in the numbers who thought that statistics was essential to understand the literature (p = 0.28) and those who felt comfortable with the basics of statistics (p = 0.06). More than half the students in both cohorts felt that they were comfortable with the basics of medical statistics. Conclusion Using a variety of media, and placing emphasis on interpretation can help make teaching, learning and understanding of statistics more people-centred and relevant, resulting in better outcomes for students. PMID:18452599

  2. Avalanche Statistics Identify Intrinsic Stellar Processes near Criticality in KIC 8462852

    NASA Astrophysics Data System (ADS)

    Sheikh, Mohammed A.; Weaver, Richard L.; Dahmen, Karin A.

    2016-12-01

    The star KIC 8462852 (Tabby's star) has shown anomalous drops in light flux. We perform a statistical analysis of the more numerous smaller dimming events by using methods found useful for avalanches in ferromagnetism and plastic flow. Scaling exponents for avalanche statistics and temporal profiles of the flux during the dimming events are close to mean-field predictions. Scaling collapses suggest that this star may be near a nonequilibrium critical point. The large events are interpreted as avalanches marked by modified dynamics, limited by the system size, and not within the scaling regime.

  3. Individuality in harpsichord performance: disentangling performer- and piece-specific influences on interpretive choices

    PubMed Central

    Gingras, Bruno; Asselin, Pierre-Yves; McAdams, Stephen

    2013-01-01

    Although a growing body of research has examined issues related to individuality in music performance, few studies have attempted to quantify markers of individuality that transcend pieces and musical styles. This study aims to identify such meta-markers by discriminating between influences linked to specific pieces or interpretive goals and performer-specific playing styles, using two complementary statistical approaches: linear mixed models (LMMs) to estimate fixed (piece and interpretation) and random (performer) effects, and similarity analyses to compare expressive profiles on a note-by-note basis across pieces and expressive parameters. Twelve professional harpsichordists recorded three pieces representative of the Baroque harpsichord repertoire, including three interpretations of one of these pieces, each emphasizing a different melodic line, on an instrument equipped with a MIDI console. Four expressive parameters were analyzed: articulation, note onset asynchrony, timing, and velocity. LMMs showed that piece-specific influences were much larger for articulation than for other parameters, for which performer-specific effects were predominant, and that piece-specific influences were generally larger than effects associated with interpretive goals. Some performers consistently deviated from the mean values for articulation and velocity across pieces and interpretations, suggesting that global measures of expressivity may in some cases constitute valid markers of artistic individuality. Similarity analyses detected significant associations among the magnitudes of the correlations between the expressive profiles of different performers. These associations were found both when comparing across parameters and within the same piece or interpretation, or on the same parameter and across pieces or interpretations. These findings suggest the existence of expressive meta-strategies that can manifest themselves across pieces, interpretive goals, or expressive devices

  4. Targeting Lexicon in Interpreting.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Shakir, Abdullah

    1994-01-01

    Studies student interpreters in the Master's Translation Program at Yarmouk University in Jordan. Analyzes the difficulties of these students, particularly regarding lexical competence, when interpreting from Arabic to English, emphasizing the need to teach lexicon all through interpreting programs. (HB)

  5. Statistical Data Editing in Scientific Articles.

    PubMed

    Habibzadeh, Farrokh

    2017-07-01

    Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in the presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on the statistical analyses of the data. Numbers should be reported with appropriate precision. The standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used to describe normally and non-normally distributed data, respectively. If possible, it is better to report 95% confidence intervals (CI) for statistics, at least for the main outcome variables. Finally, P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors would do well to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
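
    The reporting rule in this article (mean with SD for roughly normal data, median with IQR otherwise) can be operationalized in a few lines. The normality check and cutoff below are one illustrative choice, not a prescription from the article; inspection of plots should accompany any automated rule.

        import numpy as np
        from scipy import stats

        def describe(x, alpha=0.05):
            """Mean (SD) if plausibly normal, median (IQR) otherwise."""
            x = np.asarray(x, dtype=float)
            if stats.shapiro(x).pvalue >= alpha:
                return f"mean (SD): {x.mean():.1f} ({x.std(ddof=1):.1f})"
            q1, q3 = np.percentile(x, [25, 75])
            return f"median (IQR): {np.median(x):.1f} ({q1:.1f}-{q3:.1f})"

        rng = np.random.default_rng(3)
        print(describe(rng.normal(50, 10, 200)))    # symmetric sample
        print(describe(rng.lognormal(3, 1, 200)))   # skewed sample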

  6. Statistical speed of quantum states: Generalized quantum Fisher information and Schatten speed

    NASA Astrophysics Data System (ADS)

    Gessner, Manuel; Smerzi, Augusto

    2018-02-01

    We analyze families of measures for the quantum statistical speed which include as special cases the quantum Fisher information, the trace speed, i.e., the quantum statistical speed obtained from the trace distance, and more general quantifiers obtained from the family of Schatten norms. These measures quantify the statistical speed under generic quantum evolutions and are obtained by maximizing classical measures over all possible quantum measurements. We discuss general properties, optimal measurements, and upper bounds on the speed of separable states. We further provide a physical interpretation for the trace speed by linking it to an analog of the quantum Cramér-Rao bound for median-unbiased quantum phase estimation.
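
    For orientation, the standard definitions behind these quantities can be written out; the display below summarizes textbook material under the notation used here and is not quoted from the paper. With the Schatten norm and its induced distance,

        \[
        \lVert A \rVert_p = \bigl(\operatorname{Tr}\lvert A\rvert^{p}\bigr)^{1/p},
        \qquad
        D_p(\rho,\sigma) = \tfrac{1}{2}\,\lVert \rho-\sigma \rVert_p ,
        \]

    the statistical speed of a family of states \(\rho_\theta\) is the initial rate of change of distance,

        \[
        v_p = \frac{d}{d\theta}\,D_p(\rho_0,\rho_\theta)\Big|_{\theta=0},
        \]

    with \(p=1\) giving the trace speed, while the quantum Fisher information \(F_Q\) bounds phase estimation through the quantum Cramér-Rao bound \(\Delta\theta \ge 1/\sqrt{\nu F_Q}\) for \(\nu\) independent repetitions.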

  7. Directionality Effects in Simultaneous Language Interpreting: The Case of Sign Language Interpreters in the Netherlands

    ERIC Educational Resources Information Center

    van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…

  8. A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data.

    PubMed

    Nishiyama, Takeshi; Takahashi, Kunihiko; Tango, Toshiro; Pinto, Dalila; Scherer, Stephen W; Takami, Satoshi; Kishino, Hirohisa

    2011-05-26

    Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.

  9. The wetland continuum: a conceptual framework for interpreting biological studies

    USGS Publications Warehouse

    Euliss, N.H.; LaBaugh, J.W.; Fredrickson, L.H.; Mushet, D.M.; Swanson, G.A.; Winter, T.C.; Rosenberry, D.O.; Nelson, R.D.

    2004-01-01

    We describe a conceptual model, the wetland continuum, which allows wetland managers, scientists, and ecologists to consider simultaneously the influence of climate and hydrologic setting on wetland biological communities. Although multidimensional, the wetland continuum is most easily represented as a two-dimensional gradient, with ground water and atmospheric water constituting the horizontal and vertical axes, respectively. By locating the position of a wetland on both axes of the continuum, the potential biological expression of the wetland can be predicted at any point in time. The model provides a framework useful in the organization and interpretation of biological data from wetlands by incorporating the dynamic changes these systems undergo as a result of normal climatic variation, rather than placing them into the static categories common to many wetland classification systems. While we developed this model from the literature available for depressional wetlands in the prairie pothole region of North America, we believe the concept has application to wetlands in many other geographic locations.

  10. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. Second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves; (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first-generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  11. Interpreting comprehensive two-dimensional gas chromatography using peak topography maps with application to petroleum forensics.

    PubMed

    Ghasemi Damavandi, Hamidreza; Sen Gupta, Ananya; Nelson, Robert K; Reddy, Christopher M

    2016-01-01

    Comprehensive two-dimensional gas chromatography (GC×GC) provides high-resolution separations across hundreds of compounds in a complex mixture, thus unlocking unprecedented information for intricate quantitative interpretation. We exploit this compound diversity across the GC×GC topography to provide quantitative compound-cognizant interpretation beyond target compound analysis, with petroleum forensics as a practical application. We focus on the GC×GC topography of biomarker hydrocarbons, hopanes and steranes, as they are generally recalcitrant to weathering. We introduce peak topography maps (PTM) and topography partitioning techniques that consider a notably broader and more diverse range of target and non-target biomarker compounds compared to traditional approaches that consider approximately 20 biomarker ratios. Specifically, we consider a range of 33-154 target and non-target biomarkers with highest-to-lowest peak ratio within an injection ranging from 4.86 to 19.6 (precise numbers depend on biomarker diversity of individual injections). We also provide a robust quantitative measure for directly determining "match" between samples, without necessitating training data sets. We validate our methods across 34 GC×GC injections from a diverse portfolio of petroleum sources, and provide quantitative comparison of performance against established statistical methods such as principal components analysis (PCA). Our data set includes a wide range of samples collected following the 2010 Deepwater Horizon disaster that released approximately 160 million gallons of crude oil from the Macondo well (MW). Samples that were clearly collected following this disaster exhibit a statistically significant match using PTM-based interpretation against other closely related sources. PTM-based interpretation also provides higher differentiation between closely correlated but distinct sources than obtained using PCA.

  12. Interpretations of the patient-therapist relationship in brief dynamic psychotherapy : effects on long-term mode-specific changes.

    PubMed

    Amlo, S; Engelstad, V; Fossum, A; Sørlie, T; Høglend, P; Heyerdahl, O; Sørbye, O

    1993-01-01

    The authors examined whether persistent analysis of the patient-therapist relationship in brief dynamic psychotherapy favorably affects long-term dynamic change in patients initially deemed suitable for such treatment. As is common practice, 22 highly suitable patients were given a high number of transference interpretations per session. A comparison group of 21 patients with lower suitability received the same treatment, but transference interpretations were withheld. Statistical adjustment for the deliberate nonequivalence in pretreatment suitability indicated a significant negative effect of high numbers of transference interpretations on long-term dynamic changes. Demographic variables, DSM-III diagnoses, additional treatment, life events in the follow-up years, and therapist effects did not explain or obscure the findings.

  13. Interpretation of coagulation test results using a web-based reporting system.

    PubMed

    Quesada, Andres E; Jabcuga, Christine E; Nguyen, Alex; Wahed, Amer; Nedelcu, Elena; Nguyen, Andy N D

    2014-01-01

    Web-based synoptic reporting has been successfully integrated into diverse fields of pathology, improving efficiency and reducing typographic errors. Coagulation is a challenging field for practicing pathologists and pathologists-in-training alike. We aimed to develop a Web-based program that can expedite the generation of an individualized interpretive report for a variety of coagulation tests. We developed a Web-based synoptic reporting system composed of 119 coagulation report templates and 38 thromboelastography (TEG) report templates covering a wide range of findings. Our institution implemented this reporting system in July 2011; it is currently used by pathology residents and attending pathologists. Feedback from the users of these reports has been overwhelmingly positive. Surveys note the time saved and reduced errors. Our easily accessible, user-friendly, Web-based synoptic reporting system for coagulation is a valuable asset to our laboratory services. Copyright© by the American Society for Clinical Pathology (ASCP).

  14. Statistically Characterizing Intra- and Inter-Individual Variability in Children with Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    King, Bradley R.; Harring, Jeffrey R.; Oliveira, Marcio A.; Clark, Jane E.

    2011-01-01

    Previous research investigating children with Developmental Coordination Disorder (DCD) has consistently reported increased intra- and inter-individual variability during motor skill performance. Statistically characterizing this variability is not only critical for the analysis and interpretation of behavioral data, but also may facilitate our…

  15. Interdisciplinary application and interpretation of EREP data within the Susquehanna River Basin

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It has become apparent that lineaments seen on Skylab and ERTS images are not equally well defined, and that the clarity of definition of a particular lineament is recorded somewhat differently by different interpreters. In an effort to determine the extent of these variations, a semi-quantitative classification scheme was devised. In the field, along the crest of Bald Eagle Mountain in central Pennsylvania, statistical techniques borrowed from sedimentary petrography (point counting) were used to determine the existence and location of intensely fractured float rock. Verification of Skylab- and ERTS-detected lineaments on aerial photography at different scales indicated that the brecciated zones appear to occur at one margin of the 1 km zone of brecciation defined as a lineament. In the Lock Haven area, comparison of the film types from the SL4 S190A sensor revealed the black and white Pan X photography to be superior in quality for general interpretation to the black and white IR film. Also, the color positive film is better for interpretation than the color IR film.

  16. Experimental taphonomy of giant sulphur bacteria: implications for the interpretation of the embryo-like Ediacaran Doushantuo fossils.

    PubMed

    Cunningham, J A; Thomas, C-W; Bengtson, S; Marone, F; Stampanoni, M; Turner, F R; Bailey, J V; Raff, R A; Raff, E C; Donoghue, P C J

    2012-05-07

    The Ediacaran Doushantuo biota has yielded fossils interpreted as eukaryotic organisms, either animal embryos or eukaryotes basal or distantly related to Metazoa. However, the fossils have been interpreted alternatively as giant sulphur bacteria similar to the extant Thiomargarita. To test this hypothesis, living and decayed Thiomargarita were compared with Doushantuo fossils and experimental taphonomic pathways were compared with modern embryos. In the fossils, as in eukaryotic cells, subcellular structures are distributed throughout cell volume; in Thiomargarita, a central vacuole encompasses approximately 98 per cent cell volume. Key features of the fossils, including putative lipid vesicles and nuclei, complex envelope ornament, and ornate outer vesicles are incompatible with living and decay morphologies observed in Thiomargarita. Microbial taphonomy of Thiomargarita also differed from that of embryos. Embryo tissues can be consumed and replaced by bacteria, forming a replica composed of a three-dimensional biofilm, a stable fabric for potential fossilization. Vacuolated Thiomargarita cells collapse easily and do not provide an internal substrate for bacteria. The findings do not support the hypothesis that giant sulphur bacteria are an appropriate interpretative model for the embryo-like Doushantuo fossils. However, sulphur bacteria may have mediated fossil mineralization and may provide a potential bacterial analogue for other macroscopic Precambrian remains.

  17. Experimental taphonomy of giant sulphur bacteria: implications for the interpretation of the embryo-like Ediacaran Doushantuo fossils

    PubMed Central

    Cunningham, J. A.; Thomas, C.-W.; Bengtson, S.; Marone, F.; Stampanoni, M.; Turner, F. R.; Bailey, J. V.; Raff, R. A.; Raff, E. C.; Donoghue, P. C. J.

    2012-01-01

    The Ediacaran Doushantuo biota has yielded fossils interpreted as eukaryotic organisms, either animal embryos or eukaryotes basal or distantly related to Metazoa. However, the fossils have been interpreted alternatively as giant sulphur bacteria similar to the extant Thiomargarita. To test this hypothesis, living and decayed Thiomargarita were compared with Doushantuo fossils and experimental taphonomic pathways were compared with modern embryos. In the fossils, as in eukaryotic cells, subcellular structures are distributed throughout cell volume; in Thiomargarita, a central vacuole encompasses approximately 98 per cent cell volume. Key features of the fossils, including putative lipid vesicles and nuclei, complex envelope ornament, and ornate outer vesicles are incompatible with living and decay morphologies observed in Thiomargarita. Microbial taphonomy of Thiomargarita also differed from that of embryos. Embryo tissues can be consumed and replaced by bacteria, forming a replica composed of a three-dimensional biofilm, a stable fabric for potential fossilization. Vacuolated Thiomargarita cells collapse easily and do not provide an internal substrate for bacteria. The findings do not support the hypothesis that giant sulphur bacteria are an appropriate interpretative model for the embryo-like Doushantuo fossils. However, sulphur bacteria may have mediated fossil mineralization and may provide a potential bacterial analogue for other macroscopic Precambrian remains. PMID:22158954

  18. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    PubMed Central

    Cunningham, Michael R.; Baumeister, Roy F.

    2016-01-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect – contrary to their title. PMID:27826272

  19. Antecedents of obesity - analysis, interpretation, and use of longitudinal data.

    PubMed

    Gillman, Matthew W; Kleinman, Ken

    2007-07-01

    The obesity epidemic causes misery and death. Most epidemiologists accept the hypothesis that characteristics of the early stages of human development have lifelong influences on obesity-related health outcomes. Unfortunately, there is a dearth of data of sufficient scope and individual history to help unravel the associations of prenatal, postnatal, and childhood factors with adult obesity and health outcomes. Here the authors discuss analytic methods, the interpretation of models, and the use to which such rare and valuable data may be put in developing interventions to combat the epidemic. For example, analytic methods such as quantile and multinomial logistic regression can describe the effects on body mass index range rather than just its mean; structural equation models may allow comparison of the contributions of different factors at different periods in the life course. Interpretation of the data and model construction is complex, and it requires careful consideration of the biologic plausibility and statistical interpretation of putative causal factors. The goals of discovering modifiable determinants of obesity during the prenatal, postnatal, and childhood periods must be kept in sight, and analyses should be built to facilitate them. Ultimately, interventions in these factors may help prevent obesity-related adverse health outcomes for future generations.
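
    As a toy illustration of the quantile-regression idea mentioned above (describing effects on the range of body mass index rather than just its mean), the sketch below fits conditional quantiles of a synthetic BMI variable on an invented exposure using statsmodels; all variable names, data, and effect sizes are made up.

```python
# Sketch: quantile regression of BMI on an early-life exposure, showing
# how an effect can differ across the BMI distribution (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
exposure = rng.normal(size=n)                      # e.g., a standardized prenatal factor
# Effect grows in the upper tail: noise spread is tied to the exposure.
bmi = 25 + 0.5 * exposure + (1 + 0.6 * np.maximum(exposure, 0)) * rng.normal(2, 1, size=n)
df = pd.DataFrame({"bmi": bmi, "exposure": exposure})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("bmi ~ exposure", df).fit(q=q)
    print(f"q={q}: slope={fit.params['exposure']:.2f}")
# The slope at q=0.9 exceeds the median slope: the exposure shifts the
# upper tail of BMI more than its center, which a model of the mean would miss.
```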

  20. Serotonin syndrome: a complex but easily avoidable condition.

    PubMed

    Dvir, Yael; Smallwood, Patrick

    2008-01-01

    Serotonin syndrome is a potentially life-threatening adverse drug reaction caused by excessive serotonergic agonism in central and peripheral nervous system serotonergic receptors (Boyer EW, Shannon M. The serotonin syndrome. N Engl J Med 2005;352:1112-1120). Symptoms are characterized by a triad of neuron-excitatory features, which include (a) neuromuscular hyperactivity -- tremor, clonus, myoclonus, hyperreflexia and, in advanced stages, pyramidal rigidity; (b) autonomic hyperactivity -- diaphoresis, fever, tachycardia and tachypnea; (c) altered mental status -- agitation, excitement and, in advanced stages, confusion (Gillman PK. Monoamine oxidase inhibitors, opioid analgesics and serotonin toxicity. Br J Anaesth 2005;95:434-441). It arises when pharmacological agents increase serotonin neurotransmission at postsynaptic 5-hydroxytryptamine 1A and 5-hydroxytryptamine 2A receptors through increased serotonin synthesis, decreased serotonin metabolism, increased serotonin release, inhibition of serotonin reuptake or direct agonism of the serotonin receptors (Houlihan D. Serotonin syndrome resulting from coadministration of tramadol, venlafaxine, and mirtazapine. Ann Pharmacother 2004;38:411-413). The etiology is often the result of therapeutic drug use, intentional overdosing of serotonergic agents or complex interactions between drugs that directly or indirectly modulate the serotonin system (Boyer EW, Shannon M. The serotonin syndrome. N Engl J Med 2005;352:1112-1120). Due to the increasing availability of agents with serotonergic activity, physicians need to be more aware of serotonin syndrome. The following case highlights the complex nature in which serotonin syndrome can arise, as well as the proper recognition and treatment of a potentially life-threatening yet easily avoidable condition.

  1. Intraradicular Appearances Affect Radiographic Interpretation of the Periapical Area.

    PubMed

    Biscontine, Ana C; Diliberto, Adam J; Hatton, John F; Woodmansey, Karl F

    2017-12-01

    No research exists evaluating the influences of specific variables such as obturation length, radiodensity, or the presence of voids on interpretation of the periradicular area. The purpose of this study was to evaluate the effects of obturation length, radiodensity, and the presence of voids on the radiographic interpretation of periapical areas. In a Web-based survey, 3 test image groups of variable obturation lengths, radiodensities, and numbers of voids were presented to observers for evaluation of the periapical areas. Intracanal areas of the images were altered using Adobe Photoshop to create the 3 test image groups. Each observer reviewed 2 control images and 1 image from each test image group. Responses were recorded on a 5-point Likert-type scale. Within each test image group, the periapical areas were identical. Kruskal-Wallis, Mann-Whitney U, and Cliff's delta statistical tests were used to analyze results. A total of 748 observer responses were analyzed. Significant differences (P ≤ .01) in the median Likert-type scale responses were identified between the following paired groups: 3 mm short and 1 mm short, 3 mm short and flush, lower radiodensity and higher radiodensity, lower radiodensity and intermediate radiodensity, no voids and several voids, and several voids and single void. Effect sizes ranged from 0.19 to 0.41. Significant differences were noted within all 3 test image groups: length, radiodensity, and presence of voids. Length of obturation had the largest effect on interpretation of the periapical area, with the 3 mm short radiographic obturation length image interpreted less favorably. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
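
    For readers unfamiliar with the tests named above, the following sketch runs the same trio (Kruskal-Wallis across groups, Mann-Whitney U for a pairwise comparison, and Cliff's delta as the effect size) on synthetic 5-point Likert responses; the group labels and data are invented, not the study's.

```python
# Nonparametric comparison of Likert ratings across image groups (synthetic data).
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(2)
flush  = rng.integers(1, 6, size=250)                      # flush obturation
short1 = rng.integers(1, 6, size=250)                      # 1 mm short
short3 = np.clip(rng.integers(1, 6, size=250) - 1, 1, 5)   # 3 mm short, rated worse

H, p_kw = kruskal(flush, short1, short3)
U, p_mw = mannwhitneyu(short3, short1)

def cliffs_delta(x, y):
    """P(x > y) - P(x < y) over all pairs."""
    x, y = np.asarray(x), np.asarray(y)
    gt = (x[:, None] > y[None, :]).sum()
    lt = (x[:, None] < y[None, :]).sum()
    return (gt - lt) / (len(x) * len(y))

print(f"Kruskal-Wallis: H={H:.1f}, p={p_kw:.3g}")
print(f"Mann-Whitney (3 mm vs 1 mm): p={p_mw:.3g}, "
      f"Cliff's delta={cliffs_delta(short3, short1):.2f}")
```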

  2. Theory Interpretations in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan; Butler, Ricky W. (Technical Monitor)

    2001-01-01

    The purpose of this task was to provide a mechanism for theory interpretations in a prototype verification system (PVS) so that it is possible to demonstrate the consistency of a theory by exhibiting an interpretation that validates the axioms. The mechanization makes it possible to show that one collection of theories is correctly interpreted by another collection of theories under a user-specified interpretation for the uninterpreted types and constants. A theory instance is generated and imported, while the axiom instances are generated as proof obligations to ensure that the interpretation is valid. Interpretations can be used to show that an implementation is a correct refinement of a specification, that an axiomatically defined specification is consistent, or that an axiomatically defined specification captures its intended models. In addition, the theory parameter mechanism has been extended with a notion of theory as parameter so that a theory instance can be given as an actual parameter to an imported theory. Theory interpretations can thus be used to refine an abstract specification or to demonstrate the consistency of an axiomatic theory. In this report we describe the mechanism in detail. This extension is a part of PVS version 3.0, which will be publicly released in mid-2001.

  3. The study on development of easily chewable and swallowable foods for elderly

    PubMed Central

    Kim, Soojeong

    2015-01-01

    BACKGROUND/OBJECTIVES When the functions involved in food ingestion fail, the result is not only a loss of the enjoyment of eating but also a risk of protein-energy malnutrition. Dysmasesis (chewing difficulty) and difficulty in swallowing occur in various diseases, but aging may be a major cause, and the number of elderly people with dysmasesis and swallowing difficulty is expected to increase rapidly in an aging society. SUBJECTS/METHODS In this study, we carried out a survey targeting nutritionists who work in elderly care facilities, and examined the characteristics of the foods offered to the elderly and the degree of demand for the development of easily chewable and swallowable foods for elderly people who can crush foods with their own tongues but sometimes have difficulty drinking water and tea. RESULTS Elderly care facilities were found to provide finely chopped food, or food ground with water in a blender, to residents with dysmasesis. Satisfaction with the foods provided was low overall. When the applicability of foods for the elderly and the willingness to reflect them in menus were investigated, a gelification method from molecular gastronomy received the highest response rate; and among representative menus based on beef, pork, white fish, anchovies and spinach, Korean barbecue beef, hot-pepper-paste stir-fried pork, pan-fried white fish, stir-fried anchovy and seasoned spinach were offered most frequently. CONCLUSIONS This study provides a foundation for the development of easily chewable and swallowable foods, via gelification, for the elderly. It also suggests that gelified foods may reduce the risk of food going down the wrong pipe (aspiration) and improve overall food preference in the elderly. PMID:26244082

  4. The study on development of easily chewable and swallowable foods for elderly.

    PubMed

    Kim, Soojeong; Joo, Nami

    2015-08-01

    When the functions involved in food ingestion fail, the result is not only a loss of the enjoyment of eating but also a risk of protein-energy malnutrition. Dysmasesis (chewing difficulty) and difficulty in swallowing occur in various diseases, but aging may be a major cause, and the number of elderly people with dysmasesis and swallowing difficulty is expected to increase rapidly in an aging society. In this study, we carried out a survey targeting nutritionists who work in elderly care facilities, and examined the characteristics of the foods offered to the elderly and the degree of demand for the development of easily chewable and swallowable foods for elderly people who can crush foods with their own tongues but sometimes have difficulty drinking water and tea. Elderly care facilities were found to provide finely chopped food, or food ground with water in a blender, to residents with dysmasesis. Satisfaction with the foods provided was low overall. When the applicability of foods for the elderly and the willingness to reflect them in menus were investigated, a gelification method from molecular gastronomy received the highest response rate; and among representative menus based on beef, pork, white fish, anchovies and spinach, Korean barbecue beef, hot-pepper-paste stir-fried pork, pan-fried white fish, stir-fried anchovy and seasoned spinach were offered most frequently. This study provides a foundation for the development of easily chewable and swallowable foods, via gelification, for the elderly. It also suggests that gelified foods may reduce the risk of food going down the wrong pipe (aspiration) and improve overall food preference in the elderly.

  5. The kappa statistic in rehabilitation research: an examination.

    PubMed

    Tooth, Leigh R; Ottenbacher, Kenneth J

    2004-08-01

    The number and sophistication of statistical procedures reported in medical rehabilitation research is increasing. Application of the principles and methods associated with evidence-based practice has contributed to the need for rehabilitation practitioners to understand quantitative methods in published articles. Outcomes measurement and determination of reliability are areas that have experienced rapid change during the past decade. In this study, distinctions between reliability and agreement are examined. Information is presented on analytical approaches for addressing reliability and agreement with the focus on the application of the kappa statistic. The following assumptions are discussed: (1) kappa should be used with data measured on a categorical scale, (2) the patients or objects categorized should be independent, and (3) the observers or raters must make their measurement decisions and judgments independently. Several issues related to using kappa in measurement studies are described, including use of weighted kappa, methods of reporting kappa, the effect of bias and prevalence on kappa, and sample size and power requirements for kappa. The kappa statistic is useful for assessing agreement among raters, and it is being used more frequently in rehabilitation research. Correct interpretation of the kappa statistic depends on meeting the required assumptions and accurate reporting.

  6. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, $\bar{m}$, and standard deviation, $s$, of a unimodal distribution given estimates of its mode, $m$, and of the smallest, $a$, and largest, $b$, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting $a$ and $b$ as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, $N$, considered in estimating the set $\{m, a, b\}$; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to $\{m, a, b, N\}$ and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, $\bar{m}$ and $s$ are computed using exact formulae.
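
    For orientation, the classical PERT point formulas that this paper revisits, and the beta parameterization underlying them, fit in a few lines; the sketch below is the textbook approach, not the order-statistics likelihood method proposed in the paper.

```python
# Classical PERT point estimates and the underlying beta parameterization.
a, m, b = 2.0, 5.0, 12.0          # optimistic, most likely, pessimistic

# Classical PERT formulas:
mean_pert = (a + 4 * m + b) / 6
sd_pert = (b - a) / 6

# A beta distribution on [a, b] with mode m and concentration kappa;
# kappa = 4 reproduces the classical mean formula exactly.
kappa = 4.0
alpha = 1 + kappa * (m - a) / (b - a)
beta = 1 + kappa * (b - m) / (b - a)
mean_beta = a + (b - a) * alpha / (alpha + beta)

print(f"classical: mean={mean_pert:.3f}, sd={sd_pert:.3f}")
print(f"beta({alpha:.2f},{beta:.2f}) on [a,b]: mean={mean_beta:.3f}")
```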

  7. Neuroimaging with functional near infrared spectroscopy: From formation to interpretation

    NASA Astrophysics Data System (ADS)

    Herrera-Vega, Javier; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe

    2017-09-01

    Functional Near Infrared Spectroscopy (fNIRS) is gaining momentum as a functional neuroimaging modality to investigate the cerebral hemodynamics subsequent to neural metabolism. Like other neuroimaging modalities, it is a tool for neuroscience to understand brain function at the behavioural and cognitive levels. To extract useful knowledge from functional neuroimages it is critical to understand the series of transformations applied during the process of information retrieval and how they bound the interpretation. This process starts with the irradiation of the head tissues with infrared light to obtain the raw neuroimage, proceeds with computational and statistical analysis revealing hidden associations between pixel intensities and the encoded neural activity, and ends with the explanation of some particular aspect of brain function. Although there is extensive literature addressing each individual step of fNIRS separately, this paper overviews the complete transformation sequence, through image formation, reconstruction and analysis, to provide an insight into the final functional interpretation.

  8. Interrater reliability: the kappa statistic.

    PubMed

    McHugh, Mary L

    2012-01-01

    The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
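
    The computation described above is compact enough to show directly: the sketch below computes percent agreement and Cohen's kappa from a two-rater confusion table (the counts are invented for illustration).

```python
# Percent agreement vs. Cohen's kappa from a two-rater confusion table.
import numpy as np

# Rows: rater 1's category; columns: rater 2's category.
table = np.array([[45,  5],
                  [10, 40]], dtype=float)
n = table.sum()

p_observed = np.trace(table) / n              # raw percent agreement
marg1 = table.sum(axis=1) / n                 # rater 1 marginals
marg2 = table.sum(axis=0) / n                 # rater 2 marginals
p_chance = (marg1 * marg2).sum()              # agreement expected by chance alone
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percent agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
# Here 0.85 raw agreement yields kappa = 0.70: chance agreement is
# subtracted out, which is exactly Cohen's critique of percent agreement.
```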

  9. Solar-assisted photodegradation of isoproturon over easily recoverable titania catalysts.

    PubMed

    Tolosana-Moranchel, A; Carbajo, J; Faraldos, M; Bahamonde, A

    2017-03-01

    An easily recoverable homemade TiO2 catalyst (GICA-1) has been evaluated during the overall photodegradation process, understood as photocatalytic efficiency plus the catalyst recovery step, in the solar light-assisted photodegradation of isoproturon and its reuse in two consecutive cycles. The global feasibility has been compared to the commercial TiO2 P25. The homemade GICA-1 catalyst presented better sedimentation efficiency than TiO2 P25 at all studied pHs, which could be explained by its higher average hydrodynamic particle size (3 μm) and other physicochemical surface properties. The evaluation of the overall process (isoproturon photo-oxidation + catalyst recovery) revealed the strengths of the homemade GICA-1 titania catalyst: total removal of isoproturon in less than 60 min, easy recovery by sedimentation, and reusability in two consecutive cycles without any loss of photocatalytic efficiency. Therefore, considering the whole photocatalytic cycle (good performance in photodegradation plus the catalyst recovery step), the homemade GICA-1 photocatalyst proved more affordable than the commercial TiO2 P25.

  10. Statistics on gene-based laser speckles with a small number of scatterers: implications for the detection of polymorphism in the Chlamydia trachomatis omp1 gene

    NASA Astrophysics Data System (ADS)

    Ulyanov, Sergey S.; Ulianova, Onega V.; Zaytsev, Sergey S.; Saltykov, Yury V.; Feodorova, Valentina A.

    2018-04-01

    The transformation mechanism for a nucleotide sequence of the Chlamydia trachomatis gene into a speckle pattern has been considered. The first- and second-order statistics of gene-based speckles have been analyzed. It has been demonstrated that gene-based speckles do not obey Gaussian statistics and belong to the class of speckles with a small number of scatterers. It has been shown that gene polymorphism can be easily detected through analysis of the statistical characteristics of gene-based speckles.

  11. Statistical interpretation of transient current power-law decay in colloidal quantum dot arrays

    NASA Astrophysics Data System (ADS)

    Sibatov, R. T.

    2011-08-01

    A new statistical model of the charge transport in colloidal quantum dot arrays is proposed. It takes into account Coulomb blockade forbidding multiple occupancy of nanocrystals and the influence of energetic disorder of interdot space. The model explains power-law current transients and the presence of the memory effect. The fractional differential analogue of Ohm's law is found phenomenologically for nanocrystal arrays. The model combines ideas that were considered as conflicting by other authors: the Scher-Montroll idea about the power-law distribution of waiting times in localized states for disordered semiconductors is applied taking into account Coulomb blockade; Novikov's condition about the asymptotic power-law distribution of time intervals between successful current pulses in conduction channels is fulfilled; and the carrier injection blocking predicted by Ginger and Greenham (2000 J. Appl. Phys. 87 1361) takes place.

  12. Agreement in electrocardiogram interpretation in patients with septic shock.

    PubMed

    Mehta, Sangeeta; Granton, John; Lapinsky, Stephen E; Newton, Gary; Bandayrel, Kristofer; Little, Anjuli; Siau, Chuin; Cook, Deborah J; Ayers, Dieter; Singer, Joel; Lee, Terry C; Walley, Keith R; Storms, Michelle; Cooper, Jamie; Holmes, Cheryl L; Hebert, Paul; Gordon, Anthony C; Presneill, Jeff; Russell, James A

    2011-09-01

    The reliability of electrocardiogram interpretation to diagnose myocardial ischemia in critically ill patients is unclear. In adults with septic shock, we assessed intra- and inter-rater agreement of electrocardiogram interpretation, and the effect of knowledge of troponin values on these interpretations. Prospective substudy of a randomized trial of vasopressin vs. norepinephrine in septic shock. Nine Canadian intensive care units. Adults with septic shock requiring at least 5 μg/min of norepinephrine for 6 hrs. Twelve-lead electrocardiograms were recorded before study drug, and 6 hrs, 2 days, and 4 days after study drug initiation. Two physician readers, blinded to patient data and group, independently interpreted electrocardiograms on three occasions (first two readings were blinded to patient data; third reading was unblinded to troponin). To calibrate and refine definitions, both readers initially reviewed 25 trial electrocardiograms representing normal to abnormal. Cohen's Kappa and the φ statistic were used to analyze intra- and inter-rater agreement. One hundred twenty-one patients (62.2 ± 16.5 yrs, Acute Physiology and Chronic Health Evaluation II 28.6 ± 7.7) had 373 electrocardiograms. Blinded to troponin, readers 1 and 2 interpreted 46.4% and 30.0% of electrocardiograms as normal, and 15.3% and 12.3% as ischemic, respectively. Intrarater agreement was moderate for overall ischemia (κ 0.54 and 0.58), moderate/good for "normal" (κ 0.69 and 0.55), fair to good for specific signs of ischemia (ST elevation, T inversion, and Q waves, reader 1 κ 0.40 to 0.69; reader 2 κ 0.56 to 0.70); and good/very good for atrial arrhythmias (κ 0.84 and 0.79) and bundle branch block (κ 0.88 and 0.79). Inter-rater agreement was fair for ischemia (κ 0.29), moderate for ST elevation (κ 0.48), T inversion (κ 0.52), and Q waves (κ 0.44), good for bundle branch block (κ 0.78), and very good for atrial arrhythmias (κ 0.83). Inter-rater agreement for ischemia improved

  13. Statistics of Low-Mass Companions to Stars: Implications for Their Origin

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    One of the more significant results from observational astronomy over the past few years has been the detection, primarily via radial velocity studies, of low-mass companions (LMCs) to solar-like stars. The commonly held interpretation of these is that the majority are "extrasolar planets" whereas the rest are brown dwarfs, the distinction made on the basis of an apparent discontinuity in the distribution of M sin i for LMCs as revealed by a histogram. We report here results from statistical analysis of M sin i, as well as of the orbital elements data for available LMCs, to test the assertion that the LMC population is heterogeneous. The outcome is mixed. Solely on the basis of the distribution of M sin i, a heterogeneous model is preferable. Overall, we find that a definitive statement asserting that the LMC population is heterogeneous is, at present, unjustified. In addition we compare statistics of LMCs with a comparable sample of stellar binaries. We find a remarkable statistical similarity between these two populations. This similarity, coupled with marked populational dissimilarity between LMCs and acknowledged planets, motivates us to suggest a common-origin hypothesis for LMCs and stellar binaries as an alternative to the prevailing interpretation. We discuss merits of such a hypothesis and indicate a possible scenario for the formation of LMCs.

  14. MO-F-CAMPUS-I-01: Accuracy of Radiologists Interpretation of Mammographic Breast Density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedantham, S; Shi, L; Karellas, A

    2015-06-15

    Purpose: Several commercial and non-commercial software and techniques are available for determining breast density from mammograms. However, where mandated by law the breast density information communicated to the subject/patient is based on radiologist's interpretation of breast density from mammograms. Several studies have reported on the concordance among radiologists in interpreting mammographic breast density. In this work, we investigated the accuracy of radiologist's interpretation of breast density. Methods: Volumetric breast density (VBD) determined from 134 unilateral dedicated breast CT scans from 134 subjects was considered the truth. An MQSA-qualified study radiologist with more than 20 years of breast imaging experience reviewed the DICOM “for presentation” standard 2-view mammograms of the corresponding breasts and assigned BIRADS breast density categories. For statistical analysis, the breast density categories were dichotomized in two ways; fatty vs. dense breasts where “fatty” corresponds to BIRADS breast density categories A/B, and “dense” corresponds to BIRADS breast density categories C/D, and extremely dense vs. fatty to heterogeneously dense breasts, where extremely dense corresponds to BIRADS breast density category D and BIRADS breast density categories A through C were grouped as fatty to heterogeneously dense breasts. Logistic regression models (SAS 9.3) were used to determine the association between radiologist's interpretation of breast density and VBD from breast CT, from which the area under the ROC (AUC) was determined. Results: Both logistic regression models were statistically significant (Likelihood Ratio test, p<0.0001). The accuracy (AUC) of the study radiologist for classification of fatty vs. dense breasts was 88.4% (95% CI: 83–94%) and for classification of extremely dense breast was 94.3% (95% CI: 90–98%). Conclusion: The accuracy of the radiologist in classifying dense and extremely dense
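
    A minimal sketch of the reported analysis pattern, logistic regression of a dichotomized radiologist call on volumetric breast density with AUC as the accuracy measure, is given below; the data, threshold, and effect size are synthetic assumptions, not the study's.

```python
# Logistic regression of a dichotomized density call on CT-derived VBD,
# with AUC from the fitted model (synthetic data for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 134
vbd = rng.lognormal(mean=2.3, sigma=0.6, size=n)       # volumetric density, %
# The reader is more likely to call "dense" as true VBD rises (invented curve):
p_dense = 1 / (1 + np.exp(-(vbd - 15) / 4))
call_dense = rng.random(n) < p_dense

model = LogisticRegression().fit(vbd.reshape(-1, 1), call_dense)
auc = roc_auc_score(call_dense, model.predict_proba(vbd.reshape(-1, 1))[:, 1])
print(f"AUC = {auc:.2f}")
```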

  15. Evaluation of 12 strategies for obtaining second opinions to improve interpretation of breast histopathology: simulation study

    PubMed Central

    Tosteson, Anna NA; Pepe, Margaret S; Longton, Gary M; Nelson, Heidi D; Geller, Berta; Carney, Patricia A; Onega, Tracy; Allison, Kimberly H; Jackson, Sara L; Weaver, Donald L

    2016-01-01

    Objective To evaluate the potential effect of second opinions on improving the accuracy of diagnostic interpretation of breast histopathology. Design Simulation study. Setting 12 different strategies for acquiring independent second opinions. Participants Interpretations of 240 breast biopsy specimens by 115 pathologists, one slide for each case, compared with reference diagnoses derived by expert consensus. Main outcome measures Misclassification rates for individual pathologists and for 12 simulated strategies for second opinions. Simulations compared accuracy of diagnoses from single pathologists with that of diagnoses based on pairing interpretations from first and second independent pathologists, where resolution of disagreements was by an independent third pathologist. 12 strategies were evaluated in which acquisition of second opinions depended on initial diagnoses, assessment of case difficulty or borderline characteristics, pathologists’ clinical volumes, or whether a second opinion was required by policy or desired by the pathologists. The 240 cases included benign without atypia (10% non-proliferative, 20% proliferative without atypia), atypia (30%), ductal carcinoma in situ (DCIS, 30%), and invasive cancer (10%). Overall misclassification rates and agreement statistics depended on the composition of the test set, which included a higher prevalence of difficult cases than in typical practice. Results Misclassification rates significantly decreased (P<0.001) with all second opinion strategies except for the strategy limiting second opinions only to cases of invasive cancer. The overall misclassification rate decreased from 24.7% to 18.1% when all cases received second opinions (P<0.001). Obtaining both first and second opinions from pathologists with a high volume (≥10 breast biopsy specimens weekly) resulted in the lowest misclassification rate in this test set (14.3%, 95% confidence interval 10.9% to 18.0%). Obtaining second opinions only for
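
    A toy Monte Carlo conveys why pairing plus arbitration lowers misclassification, as reported above: if two independent readings agree, their call stands; otherwise a third reading decides. The per-reading error rate below is an invented constant, and errors are assumed independent and binary, which is far simpler than the study's five-category, difficulty-stratified simulation.

```python
# Toy simulation of one second-opinion strategy vs. a single reader.
import numpy as np

rng = np.random.default_rng(4)
n_cases, n_sims = 240, 2000
err = 0.25                      # assumed chance any single reading is wrong

single = rng.random((n_sims, n_cases)) < err

r1 = rng.random((n_sims, n_cases)) < err
r2 = rng.random((n_sims, n_cases)) < err
r3 = rng.random((n_sims, n_cases)) < err
# If the first two readings agree, keep their call; otherwise the third decides.
paired = np.where(r1 == r2, r1, r3)

print(f"single reader:      {single.mean():.3f}")
print(f"pair + arbitration: {paired.mean():.3f}")
# Analytically: err**2 + 2*err*(1-err)*err = 0.156 < 0.25, matching the drop.
```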

  16. Effects of Non-Normal Outlier-Prone Error Distribution on Kalman Filter Track

    DTIC Science & Technology

    1991-09-01

    ...other possibilities exist. For example, the GST (Generic Statistical Tracker) uses four motion models [Ref. 4]. The GST keeps track of both the target... Although this procedure is not easily statistically interpretable, it was used for the sake of comparison with the other... [The remainder of this excerpt is OCR residue from a Fortran source listing of target-motion options, including a "second order Gauss Markov target" and a "random tour target".]

  17. Combining data visualization and statistical approaches for interpreting measurements and meta-data: Integrating heatmaps, variable clustering, and mixed regression models

    EPA Science Inventory

    The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...
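
    As a minimal illustration of the heatmap-plus-clustering idea in this record, the sketch below draws a clustered heatmap of a random analyte-by-sample matrix with seaborn; the data and labels are placeholders, not EPA data.

```python
# Clustered heatmap of an analyte-by-sample matrix (placeholder data).
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(5)
data = pd.DataFrame(
    rng.normal(size=(30, 12)),
    index=[f"analyte_{i}" for i in range(30)],
    columns=[f"sample_{j}" for j in range(12)],
)
# z-score each analyte (row) and hierarchically cluster both axes.
g = sns.clustermap(data, z_score=0, cmap="vlag")
g.savefig("clustermap.png")
```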

  18. Knowledge-base for interpretation of cerebrospinal fluid data patterns. Essentials in neurology and psychiatry.

    PubMed

    Reiber, Hansotto

    2016-06-01

    The physiological and biophysical knowledge base for interpretation of cerebrospinal fluid (CSF) data and reference ranges is essential for the clinical pathologist and neurochemist. With the description of the CSF-flow-dependent barrier function, the dynamics and concentration gradients of blood-derived, brain-derived and leptomeningeal proteins in CSF, and the specificity-independent functions of B-lymphocytes in the brain, the neurologist, psychiatrist, neurosurgeon and neuropharmacologist may also find essentials for diagnosis, research or development of therapies. This review may help to replace outdated ideas such as "leakage" models of the barriers, linear immunoglobulin index interpretations or CSF electrophoresis. Calculations, interpretations and analytical pitfalls are described for albumin quotients, quantitation of immunoglobulin synthesis in Reibergrams, oligoclonal IgG, IgM analysis, the polyspecific (MRZ) antibody reaction, the statistical treatment of CSF data and general quality assessment in the CSF laboratory. The diagnostic relevance is documented in an accompanying review.

  19. Density profiles in the Scrape-Off Layer interpreted through filament dynamics

    NASA Astrophysics Data System (ADS)

    Militello, Fulvio

    2017-10-01

    We developed a new theoretical framework to clarify the relation between radial Scrape-Off Layer density profiles and the fluctuations that generate them. The framework provides an interpretation of the experimental features of the profiles and of the turbulence statistics on the basis of simple properties of the filaments, such as their radial motion and their draining towards the divertor. L-mode and inter-ELM filaments are described as a Poisson process in which each event is independent and modelled with a wave function of amplitude and width statistically distributed according to experimental observations and evolving according to fluid equations. We will rigorously show that radially accelerating filaments, less efficient parallel exhaust and also a statistical distribution of their radial velocity can contribute to induce flatter profiles in the far SOL and therefore enhance plasma-wall interactions. A quite general result of our analysis is the resiliency of this non-exponential nature of the profiles and the increase of the relative fluctuation amplitude towards the wall, as experimentally observed. According to the framework, profile broadening at high fueling rates can be caused by interactions with neutrals (e.g. charge exchange) in the divertor or by a significant radial acceleration of the filaments. The framework assumptions were tested with 3D numerical simulations of seeded SOL filaments based on a two fluid model. In particular, filaments interact through the electrostatic field they generate only when they are in close proximity (separation comparable to their width in the drift plane), thus justifying our independence hypothesis. In addition, we will discuss how isolated filament motion responds to variations in the plasma conditions, and specifically divertor conditions. Finally, using the theoretical framework we will reproduce and interpret experimental results obtained on JET, MAST and HL-2A.
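
    One quantitative point in this framework, that a statistical distribution of filament radial velocities flattens the far-SOL profile relative to a single fixed velocity, can be reproduced with a few lines of Monte Carlo; the draining time, velocity distribution, and length scales below are illustrative assumptions, not values from the paper.

```python
# Superposition of exponentially draining filaments: a spread of radial
# velocities fattens the tail of the mean density profile.
import numpy as np

rng = np.random.default_rng(6)
tau = 1.0e-4                      # parallel draining time, s (assumed)
x = np.linspace(0.0, 0.05, 200)   # radial distance into the SOL, m

# Fixed velocity: pure exponential with e-folding length v0 * tau.
v0 = 200.0                        # m/s (assumed)
fixed = np.exp(-x / (v0 * tau))

# Distributed velocities: average the single-filament profiles.
v = rng.gamma(shape=2.0, scale=v0 / 2.0, size=5000)   # mean v0, broad spread
distributed = np.exp(-x[:, None] / (v[None, :] * tau)).mean(axis=1)

# The ratio grows with x: fast filaments dominate the far-SOL tail.
print(distributed[-1] / fixed[-1])
```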

  20. Homeostasis and Gauss statistics: barriers to understanding natural variability.

    PubMed

    West, Bruce J

    2010-06-01

    In this paper, the concept of knowledge is argued to be the top of a three-tiered system of science. The first tier is that of measurement and data, followed by information consisting of the patterns within the data, and ending with theory that interprets the patterns and yields knowledge. Thus, when a scientific theory ceases to be consistent with the database the knowledge based on that theory must be re-examined and potentially modified. Consequently, all knowledge, like glory, is transient. Herein we focus on the non-normal statistics of physiologic time series and conclude that the empirical inverse power-law statistics and long-time correlations are inconsistent with the theoretical notion of homeostasis. We suggest replacing the notion of homeostasis with that of Fractal Physiology.

  1. Statistical foundations of liquid-crystal theory

    PubMed Central

    Seguin, Brian; Fried, Eliot

    2013-01-01

    Working on a state space determined by considering a discrete system of rigid rods, we use nonequilibrium statistical mechanics to derive macroscopic balance laws for liquid crystals. A probability function that satisfies the Liouville equation serves as the starting point for deriving each macroscopic balance. The terms appearing in the derived balances are interpreted as expected values and explicit formulas for these terms are obtained. Among the list of derived balances appear two, the tensor moment of inertia balance and the mesofluctuation balance, that are not standard in previously proposed macroscopic theories for liquid crystals but which have precedents in other theories for structured media. PMID:23554513

  2. Medical Interpreters in Outpatient Practice.

    PubMed

    Jacobs, Barb; Ryan, Anne M; Henrichs, Katherine S; Weiss, Barry D

    2018-01-01

    This article provides an overview of the federal requirements related to providing interpreter services for non-English-speaking patients in outpatient practice. Antidiscrimination provisions in federal law require health programs and clinicians receiving federal financial assistance to take reasonable steps to provide meaningful access to individuals with limited English proficiency who are eligible for or likely to be encountered in their health programs or activities. Federal financial assistance includes grants, contracts, loans, tax credits and subsidies, as well as payments through Medicaid, the Children's Health Insurance Program, and most Medicare programs. The only exception is providers whose only federal assistance is through Medicare Part B, an exception that applies to a very small percentage of practicing physicians. All required language assistance services must be free and provided by qualified translators and interpreters. Interpreters must meet specified qualifications and ideally be certified. Although the cost of interpreter services can be considerable, ranging from $45-$150/hour for in-person interpreters, to $1.25-$3.00/minute for telephone interpreters, and $1.95-$3.49/minute for video remote interpreting, it may be reimbursed or covered by a patient's Medicaid or other federally funded medical insurance. Failure to use qualified interpreters can have serious negative consequences for both practitioners and patients. In one study, 1 of every 40 malpractice claims was related, in whole or in part, to failure to provide appropriate interpreter services. Most importantly, however, the use of qualified interpreters results in better and more efficient patient care. © 2018 Annals of Family Medicine, Inc.

  3. Interobserver reproducibility in pathologist interpretation of columnar-lined esophagus.

    PubMed

    Mastracci, Luca; Piol, Nataniele; Molinaro, Luca; Pitto, Francesca; Tinelli, Carmine; De Silvestri, Annalisa; Fiocca, Roberto; Grillo, Federica

    2016-02-01

    Confirmation of endoscopically suspected esophageal metaplasia (ESEM) requires histology, but confusion in the histological definition of columnar-lined esophagus (CLE) is a longstanding problem. The aim of this study is to evaluate interpathologist variability in the interpretation of CLE. Thirty pathologists were invited to review three ten-case sets of CLE biopsies. In the first set, the cases were provided with descriptive endoscopy only; in the second and the third sets, ESEM extent using Prague criteria was provided. Moreover, participants were required to refer to a diagnostic chart for evaluation of the third set. Agreement was statistically assessed using Randolph's free-marginal multirater kappa. While substantial agreement in recognizing columnar epithelium (K = 0.76) was recorded, the overall concordance in clinico-pathological diagnosis was low (K = 0.38). The overall concordance rate improved from the first (K = 0.27) to the second (K = 0.40) and third step (K = 0.46). Agreement was substantial when diagnosing Barrett's esophagus (BE) with intestinal metaplasia or inlet patch (K = 0.65 and K = 0.89), respectively, in the third step, while major problems in interpretation of CLE were observed when only cardia/cardia-oxyntic atrophic-type epithelium was present (K = 0.05-0.29). In conclusion, precise endoscopic description and the use of a diagnostic chart increased consistency in CLE interpretation of esophageal biopsies. Agreement was substantial for some diagnostic categories (BE with intestinal metaplasia and inlet patch) with a well-defined clinical profile. Interpretation of cases with cardia/cardia-oxyntic atrophic-type epithelium, with or without ESEM, was least consistent, which reflects lack of clarity of definition and results in variable management of this entity.

  4. Functional brain networks for learning predictive statistics.

    PubMed

    Giorgio, Joseph; Karlaftis, Vasilis M; Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew; Kourtzi, Zoe

    2017-08-18

    Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. This skill relies on extracting regular patterns in space and time by mere exposure to the environment (i.e., without explicit feedback). Yet, we know little about the functional brain networks that mediate this type of statistical learning. Here, we test whether changes in the processing and connectivity of functional brain networks due to training relate to our ability to learn temporal regularities. By combining behavioral training and functional brain connectivity analysis, we demonstrate that individuals adapt to the environment's statistics as they change over time from simple repetition to probabilistic combinations. Further, we show that individual learning of temporal structures relates to decision strategy. Our fMRI results demonstrate that learning-dependent changes in fMRI activation within and functional connectivity between brain networks relate to individual variability in strategy. In particular, extracting the exact sequence statistics (i.e., matching) relates to changes in brain networks known to be involved in memory and stimulus-response associations, while selecting the most probable outcomes in a given context (i.e., maximizing) relates to changes in frontal and striatal networks. Thus, our findings provide evidence that dissociable brain networks mediate individual ability in learning behaviorally-relevant statistics. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. A statistical approach to the interpretation of aliphatic hydrocarbon distributions in marine sediments

    USGS Publications Warehouse

    Rapp, J.B.

    1991-01-01

    Q-mode factor analysis was used to quantitate the distribution of the major aliphatic hydrocarbon (n-alkanes, pristane, phytane) systems in sediments from a variety of marine environments. The compositions of the pure end members of the systems were obtained from factor scores and the distribution of the systems within each sample was obtained from factor loadings. All the data, from the diverse environments sampled (estuarine (San Francisco Bay), fresh-water (San Francisco Peninsula), polar-marine (Antarctica) and geothermal-marine (Gorda Ridge) sediments), were reduced to three major systems: a terrestrial system (mostly high molecular weight aliphatics with odd-numbered-carbon predominance), a mature system (mostly low molecular weight aliphatics without predominance) and a system containing mostly high molecular weight aliphatics with even-numbered-carbon predominance. With this statistical approach, it is possible to assign the percentage contribution from various sources to the observed distribution of aliphatic hydrocarbons in each sediment sample. ?? 1991.
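
    The decomposition idea, resolving each sample into nonnegative end-member contributions via factor scores and loadings, can be sketched with nonnegative matrix factorization as a stand-in for the paper's Q-mode factor analysis; the compound profiles and mixtures below are synthetic.

```python
# End-member decomposition of hydrocarbon compositions (NMF as a stand-in
# for Q-mode factor analysis; synthetic data).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
n_compounds = 25

# Three synthetic end members, e.g. terrestrial / mature / even-carbon systems.
end_members = rng.random((3, n_compounds))
end_members /= end_members.sum(axis=1, keepdims=True)

# 40 "sediment samples" as random mixtures of the end members, plus noise.
mix = rng.dirichlet(alpha=[1, 1, 1], size=40)
samples = mix @ end_members + 0.01 * rng.random((40, n_compounds))

model = NMF(n_components=3, init="nndsvda", max_iter=1000)
loadings = model.fit_transform(samples)        # per-sample contributions
scores = model.components_                     # recovered end-member profiles

# Normalize loadings to percentage contributions per sample.
contrib = 100 * loadings / loadings.sum(axis=1, keepdims=True)
print(np.round(contrib[:3], 1))
```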

  6. DAnTE: a statistical tool for quantitative analysis of –omics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polpitiya, Ashoka D.; Qian, Weijun; Jaitly, Navdeep

    2008-05-03

    DAnTE (Data Analysis Tool Extension) is a statistical tool designed to address challenges unique to quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide to protein rollup methods, an extensive array of plotting functions, and a comprehensive ANOVA scheme that can handle unbalanced data and random effects. The Graphical User Interface (GUI) is designed to be very intuitive and user friendly.

  7. Prediction of transmission loss through an aircraft sidewall using statistical energy analysis

    NASA Astrophysics Data System (ADS)

    Ming, Ruisen; Sun, Jincai

    1989-06-01

    The transmission loss of randomly incident sound through an aircraft sidewall is investigated using statistical energy analysis. Formulas are also obtained for the simple calculation of sound transmission loss through single- and double-leaf panels. Both resonant and nonresonant sound transmissions can be easily calculated using the formulas. The formulas are used to predict sound transmission losses through a Y-7 propeller airplane panel. The panel measures 2.56 m x 1.38 m and has two windows. The agreement between predicted and measured values through most of the frequency ranges tested is quite good.
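
    The abstract's closed-form flavor can be illustrated with the classical field-incidence mass law for a single panel, which is the textbook baseline for such formulas (not necessarily the authors' exact expression):

```python
# Classical field-incidence mass law for a single panel.
import math

def mass_law_tl(surface_density_kg_m2: float, freq_hz: float) -> float:
    """Field-incidence transmission loss in dB (a good approximation
    well below the panel's critical frequency)."""
    return 20 * math.log10(surface_density_kg_m2 * freq_hz) - 47

# A 2 mm aluminium skin is roughly 5.4 kg/m^2.
for f in (250, 500, 1000, 2000):
    print(f"{f:5d} Hz: TL ~ {mass_law_tl(5.4, f):.1f} dB")
# TL rises ~6 dB per doubling of frequency or of surface density.
```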

  8. Negative values of quasidistributions and quantum wave and number statistics

    NASA Astrophysics Data System (ADS)

    Peřina, J.; Křepelka, J.

    2018-04-01

    We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.

  9. User manual for Blossom statistical package for R

    USGS Publications Warehouse

    Talbert, Marian; Cade, Brian S.

    2005-01-01

    Blossom is an R package with functions for making statistical comparisons with distance-function based permutation tests developed by P.W. Mielke, Jr. and colleagues at Colorado State University (Mielke and Berry, 2001) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U.S. Geological Survey. This manual is intended to provide identical documentation of the statistical methods and interpretations as the manual by Cade and Richards (2005) does for the original Fortran program, but with changes made with respect to command inputs and outputs to reflect the new implementation as a package for R (R Development Core Team, 2012). This implementation in R has allowed for numerous improvements not supported by the Cade and Richards (2005) Fortran implementation, including use of categorical predictor variables in most routines.
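
    The core idea behind the package's procedures, permutation tests that re-randomize the data to build a null distribution for a statistic, is easy to sketch outside R; the example below permutation-tests a regression slope in generic Python and is not Blossom's own implementation.

```python
# Generic permutation test on a linear-model coefficient.
import numpy as np

rng = np.random.default_rng(8)
n = 60
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

obs = slope(x, y)
# Under the null of no association, permuting y breaks any x-y pairing.
perms = np.array([slope(x, rng.permutation(y)) for _ in range(5000)])
p = (1 + np.sum(np.abs(perms) >= abs(obs))) / 5001
print(f"slope={obs:.3f}, permutation p={p:.4f}")
```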

  10. Statistics for laminar flamelet modeling

    NASA Technical Reports Server (NTRS)

    Cant, R. S.; Rutland, C. J.; Trouve, A.

    1990-01-01

    Statistical information required to support modeling of turbulent premixed combustion by laminar flamelet methods is extracted from a database of Direct Numerical Simulation results for turbulent flames. The simulations were carried out previously by Rutland (1989) using a pseudo-spectral code on a three-dimensional mesh of 128 points in each direction. One-step Arrhenius chemistry was employed together with small heat release. A framework for the interpretation of the data is provided by the Bray-Moss-Libby model for the mean turbulent reaction rate. Probability density functions are obtained over surfaces of constant reaction progress variable for the tangential strain rate and the principal curvature. New insights are gained which will greatly aid the development of modeling approaches.
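
    A conditional probability density of the kind described can be approximated by histogramming a field over samples lying near a chosen progress-variable level. The sketch below uses entirely synthetic arrays as stand-ins for the DNS fields; the actual post-processing in the study is considerably more involved.

      # Sketch: PDF of a quantity conditioned on a progress-variable level.
      # All data are synthetic stand-ins for DNS fields.
      import numpy as np

      rng = np.random.default_rng(1)
      c = rng.uniform(0.0, 1.0, size=100_000)      # reaction progress variable
      strain = rng.normal(loc=2.0 * c, scale=1.0)  # stand-in strain-rate field

      level, tol = 0.5, 0.02                       # samples near c = 0.5
      on_surface = np.abs(c - level) < tol

      pdf, edges = np.histogram(strain[on_surface], bins=40, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      print(centers[:5], pdf[:5])                  # tabulated conditional PDF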

  11. A study of statistics anxiety levels of graduate dental hygiene students.

    PubMed

    Welch, Paul S; Jacks, Mary E; Smiley, Lynn A; Walden, Carolyn E; Clark, William D; Nguyen, Carol A

    2015-02-01

    In light of increased emphasis on evidence-based practice in the profession of dental hygiene, it is important that today's dental hygienist comprehend statistical measures to fully understand research articles, and thereby apply scientific evidence to practice. Therefore, the purpose of this study was to investigate statistics anxiety among graduate dental hygiene students in the U.S. A web-based, self-report, anonymous survey was emailed to directors of 17 MSDH programs in the U.S. with a request to distribute it to graduate students. The survey collected data on statistics anxiety, sociodemographic characteristics and evidence-based practice. Statistics anxiety was assessed using the Statistical Anxiety Rating Scale. The study significance level was α=0.05. Only 8 of the 17 invited programs participated in the study. Statistical Anxiety Rating Scale data revealed that graduate dental hygiene students experience low to moderate levels of statistics anxiety. Specifically, the level of anxiety on the Interpretation Anxiety factor indicated this population could struggle with making sense of scientific research. A decisive majority (92%) of students indicated statistics is essential for evidence-based practice and should be a required course for all dental hygienists. This study served to identify statistics anxiety in a previously unexplored population. The findings should be useful in both theory building and practical applications. Furthermore, the results can be used to direct future research. Copyright © 2015 The American Dental Hygienists' Association.

  12. Health significance and statistical uncertainty. The value of P-value.

    PubMed

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

    The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" (defined as "statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. Our aim is to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences of such a black-and-white vision for science and public health. The rigid interpretation of the P-value as a dichotomy favors confusion between health relevance and statistical significance, discourages thoughtful reflection, and diverts attention from what really matters, namely the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios, or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistically significant or not). When reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".
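
    To make the recommendation concrete, the sketch below computes a risk ratio and its 95% confidence interval from a 2x2 table using the standard log-transform approximation; all counts are hypothetical.

      # Risk ratio with 95% CI from a 2x2 table (log-RR normal approximation).
      # Hypothetical counts: exposed a/(a+b) vs unexposed c/(c+d) cases.
      import math

      a, b = 30, 70    # exposed: cases, non-cases
      c, d = 15, 85    # unexposed: cases, non-cases

      rr = (a / (a + b)) / (c / (c + d))
      se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
      lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
      hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
      # Reporting the full interval conveys both magnitude and uncertainty,
      # rather than a bare "significant"/"not significant" verdict.
      print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")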

  13. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.

    PubMed

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T

    2013-05-14

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.
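
    The noisy-channel computation can be written compactly: the comprehender scores each candidate intended sentence by the product of its prior probability and an edit-based likelihood of the perceived string. The Python toy below illustrates only this mechanic; the candidate sentences, prior values, and per-edit error rate are hypothetical, and the authors' experimental model is richer (in particular in how insertions and deletions are weighted).

      # Toy noisy-channel inference: P(intended | perceived) over a tiny
      # hypothesis set, with a likelihood that decays per edit. The priors
      # and the per-edit error rate (0.1) are hypothetical.
      candidates = {
          "the mother gave the candle to the daughter": 0.7,   # plausible
          "the mother gave the daughter to the candle": 0.3,   # implausible
      }
      perceived = "the mother gave the daughter to the candle"

      def edits(s1, s2):
          # Crude proxy: word positions that differ, plus a length penalty.
          # Real models weight insertions vs deletions asymmetrically
          # (the Bayesian "size principle").
          w1, w2 = s1.split(), s2.split()
          return sum(a != b for a, b in zip(w1, w2)) + abs(len(w1) - len(w2))

      noise = 0.1
      posterior = {s: p * (noise ** edits(s, perceived))
                   for s, p in candidates.items()}
      z = sum(posterior.values())
      for s, p in posterior.items():
          print(f"{p / z:.3f}  {s}")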

  14. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation

    PubMed Central

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T.

    2013-01-01

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be “well designed”–in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian “size principle”; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel. PMID:23637344

  15. Sparse approximation of currents for statistics on curves and surfaces.

    PubMed

    Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas

    2008-01-01

    Computing, processing, and visualizing statistics on shapes such as curves or surfaces is a real challenge, with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids both feature-based approaches and point-correspondence methods. This framework has proved powerful for registering brain surfaces and for measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, owing to the increasing complexity as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.
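
    The sparse-decomposition step can be illustrated generically with a matching-pursuit style greedy selection over a redundant dictionary. The numpy sketch below works on plain vectors; the paper's method operates in a space of currents, which this does not capture.

      # Greedy sparse approximation sketch (matching pursuit on plain vectors).
      # The paper applies the analogous idea in a space of currents.
      import numpy as np

      rng = np.random.default_rng(2)
      D = rng.normal(size=(50, 200))                 # dictionary of 200 atoms
      D /= np.linalg.norm(D, axis=0)                 # unit-norm columns
      signal = 2.0 * D[:, 3] - 1.5 * D[:, 17]        # truly 2-sparse target

      residual, chosen = signal.copy(), []
      for _ in range(5):                             # at most 5 atoms
          scores = D.T @ residual                    # correlation with atoms
          k = int(np.argmax(np.abs(scores)))
          chosen.append(k)
          residual = residual - scores[k] * D[:, k]  # subtract the projection
          if np.linalg.norm(residual) < 1e-6:
              break
      print("selected atoms:", chosen)  # typically includes atoms 3 and 17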

  16. A bibliometric analysis of statistical terms used in American Physical Therapy Association journals (2011-2012): evidence for educating physical therapists.

    PubMed

    Tilson, Julie K; Marshall, Katie; Tam, Jodi J; Fetters, Linda

    2016-04-22

    A primary barrier to the implementation of evidence-based practice (EBP) in physical therapy is therapists' limited ability to understand and interpret statistics. Physical therapists demonstrate limited skills and report low self-efficacy for interpreting the results of statistical procedures. While standards for physical therapist education include statistics, little empirical evidence is available to inform what should constitute such curricula. The purpose of this study was to conduct a census of the statistical terms and study designs used in the physical therapy literature and to use the results to make recommendations for curricular development in physical therapist education. We conducted a bibliometric analysis of 14 peer-reviewed journals associated with the American Physical Therapy Association over 12 months (Oct 2011-Sept 2012). Trained raters recorded every statistical term appearing in identified systematic reviews, primary research reports, and case series and case reports. Investigator-reported study design was also recorded. Terms representing the same statistical test or concept were combined into a single, representative term. Cumulative percentage was used to identify the most common representative statistical terms. Common representative terms were organized into eight categories to inform curricular design. Of 485 articles reviewed, 391 met the inclusion criteria. These 391 articles used 532 different terms, which were combined into 321 representative terms (mean 13.1, SD 8.0 per article). Eighty-one representative terms constituted 90% of all representative term occurrences. Of the remaining 240 representative terms, 105 (44%) were used in only one article. The most common study design was prospective cohort (32.5%). Physical therapy literature contains a large number of statistical terms and concepts for readers to navigate. However, in the year sampled, 81 representative terms accounted for 90% of all occurrences. These "common
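
    The cumulative-percentage criterion used to identify the common terms reduces to a few lines of Python; the term counts below are hypothetical.

      # Sketch of the cumulative-percentage criterion: find the smallest set
      # of terms accounting for 90% of all term occurrences (counts hypothetical).
      from collections import Counter

      counts = Counter({"p-value": 120, "mean": 95, "SD": 80, "ANOVA": 40,
                        "kappa": 25, "ICC": 15, "bootstrap": 5})
      total = sum(counts.values())

      running, common = 0, []
      for term, n in counts.most_common():
          running += n
          common.append(term)
          if running / total >= 0.90:
              break
      print(common)   # the "common" representative terms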

  17. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics, using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
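
    One way to make the thermodynamic analogy concrete (an illustrative formulation in the spirit of the abstract, not necessarily the authors' exact notation) is to assign the parameters θ a Boltzmann-form distribution built from the misfit χ²(θ):

      \[ p(\theta) = \frac{e^{-\beta\,\chi^{2}(\theta)}}{Z(\beta)}, \qquad Z(\beta) = \int e^{-\beta\,\chi^{2}(\theta)}\,d\theta, \qquad F = -\frac{1}{\beta}\ln Z(\beta), \]

    so that minimizing the free energy F trades goodness of fit (energy) against parameter-space volume (entropy).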

  18. Improving Quality in Teaching Statistics Concepts Using Modern Visualization: The Design and Use of the Flash Application on Pocket PCs

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Pei-Yu

    2009-01-01

    The emergence of technology has led to numerous changes in mathematical and statistical teaching and learning which has improved the quality of instruction and teacher/student interactions. The teaching of statistics, for example, has shifted from mathematical calculations to higher level cognitive abilities such as reasoning, interpretation, and…

  19. Statistical Model Selection for TID Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, R.; Gorelick, J. L.; McClure, S.

    2010-01-01

    Radiation Hardness Assurance (RHA) methodologies against Total Ionizing Dose (TID) degradation impose rigorous statistical treatments for data from a part's Radiation Lot Acceptance Test (RLAT) and/or its historical performance. However, no similar methods exist for using "similarity" data - that is, data for similar parts fabricated in the same process as the part under qualification. This is despite the greater difficulty and potential risk in interpreting similarity data. In this work, we develop methods to disentangle part-to-part, lot-to-lot and part-type-to-part-type variation. The methods we develop apply not just to qualification decisions, but also to quality control and detection of process changes and other "out-of-family" behavior. We begin by discussing the data used in the study and the challenges of developing a statistic that provides a meaningful measure of degradation across multiple part types, each with its own performance specifications. We then develop analysis techniques and apply them to the different data sets.

  20. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    Antimicrobial combinations were tested in 96-well checkerboard plates, and the results were interpreted separately using four different interpretation methods frequently preferred by researchers. The aim was to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by the choice of interpretation method. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four interpretation methods for the determination of synergistic and indifferent interactions (p< 0.0001). The highest rates of synergy were observed, for both combinations, with the method that used the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p> 0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to the different methods of interpreting the results; there is thus a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation, further studies are required that investigate the clinical benefits of synergistic combinations and compare the consistency of the results with those obtained from other standard combination tests, such as time-kill studies.
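
    For context, the fractional inhibitory concentration index (FICI) on which such interpretation methods are built is computed as below. The cut-offs shown (synergy at FICI ≤ 0.5, antagonism at FICI > 4) follow one widely used convention, which is precisely the kind of choice the study shows can alter the results; the MIC values are hypothetical.

      # FICI for a checkerboard well: sum of the fractional inhibitory
      # concentrations of the two drugs. Cut-offs follow one common
      # convention (synergy <= 0.5, antagonism > 4); MICs are hypothetical.
      def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
          return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

      def interpret(index):
          if index <= 0.5:
              return "synergy"
          if index > 4:
              return "antagonism"
          return "no interaction (indifference)"

      idx = fici(mic_a_combo=0.25, mic_a_alone=2.0,
                 mic_b_combo=0.5,  mic_b_alone=4.0)
      print(idx, interpret(idx))   # 0.25 -> synergy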

  1. Dark Matter interpretation of low energy IceCube MESE excess

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chianese, M.; Miele, G.; Morisi, S., E-mail: chianese@na.infn.it, E-mail: miele@na.infn.it, E-mail: stefano.morisi@na.infn.it

    2017-01-01

    The 2-year MESE IceCube events show a slight excess in the energy range 10–100 TeV, with a maximum local statistical significance of 2.3σ, once a hard astrophysical power-law is assumed. A spectral index smaller than 2.2 is indeed suggested by multi-messenger studies related to p-p sources and by the recent IceCube analysis of 6 years of up-going muon neutrinos. In the present paper, we propose a two-component scenario in which the extraterrestrial neutrinos are explained in terms of an astrophysical power-law and a Dark Matter signal. We consider both decaying and annihilating Dark Matter candidates with different final states (quarks and leptons) and different halo density profiles. We perform a likelihood-ratio analysis that provides a statistical significance up to 3.9σ for a Dark Matter interpretation of the IceCube low-energy excess.
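
    The quoted significances come from a likelihood-ratio test. Schematically (standard usage, not the paper's full statistical machinery), the test statistic compares the power-law-only fit with the power-law-plus-Dark-Matter fit,

      \[ \mathrm{TS} = -2\,\ln\frac{\mathcal{L}_{\text{power law}}}{\mathcal{L}_{\text{power law + DM}}}, \qquad \text{significance} \approx \sqrt{\mathrm{TS}}, \]

    where the square-root conversion assumes Wilks' theorem applies with one extra degree of freedom.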

  2. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms

    PubMed Central

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-01-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches – for example, analysis of variance (ANOVA) – are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and

  3. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms.

    PubMed

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-08-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will
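
    A minimal illustration of the review's central point for count data (a GLM rather than ANOVA), using Python's statsmodels with entirely synthetic counts:

      # Poisson GLM for count data (e.g., arthropod counts per plot), the
      # kind of model the review recommends over ANOVA for discontinuous
      # data. Data are synthetic: two treatments, 20 plots each.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      treatment = np.repeat([0, 1], 20)                  # 0 = control, 1 = GM
      counts = rng.poisson(lam=np.where(treatment == 1, 8, 10))

      X = sm.add_constant(treatment.astype(float))
      model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      print(model.summary())   # treatment coefficient on the log scale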

  4. Preparation and Use of an Easily Constructed, Inexpensive Chamber for Viewing Courtship Behaviors of Fruit Flies, Drosophila sp.

    ERIC Educational Resources Information Center

    Christensen, Timothy J.; Labov, Jay B.

    1997-01-01

    Details the construction of a viewing chamber for fruit flies that connects to a dissecting microscope and features a design that enables students to easily move fruit flies in and out of the chamber. (DDR)

  5. 77 FR 39654 - Proposed Legal Interpretation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ...-0670] Proposed Legal Interpretation AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... legal interpretation in which the agency considered clarifying prior legal interpretations regarding... inadvertently assigned an incorrect docket number to the proposed legal interpretation. This document corrects...

  6. Truths, lies, and statistics.

    PubMed

    Thiese, Matthew S; Walker, Skyler; Lindsey, Jenna

    2017-10-01

    Distribution of valuable research discoveries is needed for the continual advancement of patient care. Publication of, and subsequent reliance on, false study results would be detrimental to patient care. Unfortunately, research misconduct may originate from many sources. While there is evidence of ongoing research misconduct in all its forms, it is challenging to identify the actual occurrence of research misconduct, which is especially true for misconduct in clinical trials. Research misconduct is challenging to measure, and there are few studies reporting the prevalence or underlying causes of research misconduct among biomedical researchers. Reported prevalence estimates of misconduct are probably underestimates and range from 0.3% to 4.9%. There have been efforts to measure the prevalence of research misconduct; however, the relatively few published studies are not freely comparable because of varying characterizations of research misconduct and the methods used for data collection. There are some signs that may point to an increased possibility of research misconduct; however, there is a need for continued self-policing by biomedical researchers. There are existing resources to assist in ensuring appropriate statistical methods and preventing other types of research fraud. These include the "Statistical Analyses and Methods in the Published Literature" (SAMPL) guidelines, which help scientists determine the appropriate way to report various statistical methods; the "Strengthening Analytical Thinking for Observational Studies" (STRATOS) initiative, which emphasizes the execution and interpretation of results; and the Committee on Publication Ethics (COPE), which was created in 1997 to deliver guidance about publication ethics. COPE offers a set of views and strategies grounded in the values of honesty and accuracy.

  7. Simulations for designing and interpreting intervention trials in infectious diseases.

    PubMed

    Halloran, M Elizabeth; Auranen, Kari; Baird, Sarah; Basta, Nicole E; Bellan, Steven E; Brookmeyer, Ron; Cooper, Ben S; DeGruttola, Victor; Hughes, James P; Lessler, Justin; Lofgren, Eric T; Longini, Ira M; Onnela, Jukka-Pekka; Özler, Berk; Seage, George R; Smith, Thomas A; Vespignani, Alessandro; Vynnycky, Emilia; Lipsitch, Marc

    2017-12-29

    Interventions in infectious diseases can have both direct effects on individuals who receive the intervention and indirect effects in the population. In addition, intervention combinations can have complex interactions at the population level, which are often difficult to assess adequately with standard study designs and analytical methods. Herein, we urge the adoption of a new paradigm for the design and interpretation of intervention trials in infectious diseases, particularly with regard to emerging infectious diseases, one that more accurately reflects the dynamics of the transmission process. In an increasingly complex world, simulations can explicitly represent transmission dynamics, which are critical for proper trial design and interpretation. Certain ethical aspects of a trial can also be quantified using simulations. Further, after a trial has been conducted, simulations can be used to explore the possible explanations for the observed effects. Much is to be gained through a multidisciplinary approach that builds collaborations among experts in infectious disease dynamics, epidemiology, statistical science, economics, simulation methods, and the conduct of clinical trials.

  8. Patients and medical statistics. Interest, confidence, and ability.

    PubMed

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-11-01

    People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. Survey with retest after approximately 2 weeks. Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test-retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's alpha=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data.
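
    The internal-consistency figures reported are Cronbach's alpha, which can be computed directly from an item-response matrix; the sketch below uses a hypothetical matrix of six respondents by three items.

      # Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/var(total)).
      # Rows are respondents, columns are scale items (hypothetical data).
      import numpy as np

      items = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4],
                        [2, 3, 2], [4, 4, 5], [3, 3, 3]], dtype=float)
      k = items.shape[1]
      item_vars = items.var(axis=0, ddof=1)
      total_var = items.sum(axis=1).var(ddof=1)
      alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
      print(f"Cronbach's alpha = {alpha:.2f}")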

  9. Infinite-mode squeezed coherent states and non-equilibrium statistical mechanics (phase-space-picture approach)

    NASA Technical Reports Server (NTRS)

    Yeh, Leehwa

    1993-01-01

    The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena.

  10. 10 CFR 40.6 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 40.6 Section 40.6 Energy NUCLEAR REGULATORY COMMISSION DOMESTIC LICENSING OF SOURCE MATERIAL General Provisions § 40.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  11. 10 CFR 76.6 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Interpretations. 76.6 Section 76.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning...

  12. 10 CFR 76.6 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Interpretations. 76.6 Section 76.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning...

  13. 10 CFR 76.6 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Interpretations. 76.6 Section 76.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning...

  14. 10 CFR 76.6 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Interpretations. 76.6 Section 76.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning...

  15. 10 CFR 76.6 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Interpretations. 76.6 Section 76.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CERTIFICATION OF GASEOUS DIFFUSION PLANTS General Provisions § 76.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning...

  16. 10 CFR 55.6 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Interpretations. 55.6 Section 55.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) OPERATORS' LICENSES General Provisions § 55.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  17. 10 CFR 1016.7 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Interpretations. 1016.7 Section 1016.7 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA General Provisions § 1016.7 Interpretations. Except as specifically authorized by the Secretary of Energy in writing, no interpretation of the...

  18. 10 CFR 1016.7 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Interpretations. 1016.7 Section 1016.7 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA General Provisions § 1016.7 Interpretations. Except as specifically authorized by the Secretary of Energy in writing, no interpretation of the...

  19. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  20. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  1. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  2. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  3. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  4. 10 CFR 55.6 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Interpretations. 55.6 Section 55.6 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) OPERATORS' LICENSES General Provisions § 55.6 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  5. Statistics and Informatics in Space Astrophysics

    NASA Astrophysics Data System (ADS)

    Feigelson, E.

    2017-12-01

    Interest in statistical and computational methodology has grown rapidly in space-based astrophysics, paralleling the growth seen in Earth remote sensing. There is widespread agreement that scientific interpretation of the cosmic microwave background, discovery of exoplanets, and classification of multiwavelength surveys are too complex to be accomplished with traditional techniques. NASA operates several well-functioning Science Archive Research Centers providing 0.5 PBy datasets to the research community. These databases are integrated with full-text journal articles in the NASA Astrophysics Data System (200K pageviews/day). Data products use interoperable formats and protocols established by the International Virtual Observatory Alliance. NASA supercomputers also support complex astrophysical models of systems such as accretion disks and planet formation. Academic researcher interest in methodology has grown significantly in areas such as Bayesian inference and machine learning, and statistical research is underway to treat problems such as irregularly spaced time series and astrophysical model uncertainties. Several scholarly societies have created interest groups in astrostatistics and astroinformatics. Improvements are needed on several fronts: community education in advanced methodology is not keeping pace with research needs; statistical procedures within NASA science analysis software are sometimes not optimal, and pipeline development may not use modern software engineering techniques; and NASA offers few grant opportunities supporting research in astroinformatics and astrostatistics.

  6. Interpretive Medicine

    PubMed Central

    Reeve, Joanne

    2010-01-01

    Patient-centredness is a core value of general practice; it is defined as the interpersonal processes that support the holistic care of individuals. To date, efforts to demonstrate their relationship to patient outcomes have been disappointing, whilst some studies suggest values may be more rhetoric than reality. Contextual issues influence the quality of patient-centred consultations, impacting on outcomes. The legitimate use of knowledge, or evidence, is a defining aspect of modern practice, and has implications for patient-centredness. Based on a critical review of the literature, on my own empirical research, and on reflections from my clinical practice, I critique current models of the use of knowledge in supporting individualised care. Evidence-Based Medicine (EBM), and its implementation within health policy as Scientific Bureaucratic Medicine (SBM), define best evidence in terms of an epistemological emphasis on scientific knowledge over clinical experience. It provides objective knowledge of disease, including quantitative estimates of the certainty of that knowledge. Whilst arguably appropriate for secondary care, involving episodic care of selected populations referred in for specialist diagnosis and treatment of disease, application to general practice can be questioned given the complex, dynamic and uncertain nature of much of the illness that is treated. I propose that general practice is better described by a model of Interpretive Medicine (IM): the critical, thoughtful, professional use of an appropriate range of knowledges in the dynamic, shared exploration and interpretation of individual illness experience, in order to support the creative capacity of individuals in maintaining their daily lives. Whilst the generation of interpreted knowledge is an essential part of daily general practice, the profession does not have an adequate framework by which this activity can be externally judged to have been done well. Drawing on theory related to the

  7. 10 CFR 26.7 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Interpretations. 26.7 Section 26.7 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Administrative Provisions § 26.7 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  8. 10 CFR 26.7 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 26.7 Section 26.7 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Administrative Provisions § 26.7 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  9. 10 CFR 26.7 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 26.7 Section 26.7 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Administrative Provisions § 26.7 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  10. 10 CFR 26.7 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Interpretations. 26.7 Section 26.7 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Administrative Provisions § 26.7 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  11. 10 CFR 26.7 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Interpretations. 26.7 Section 26.7 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Administrative Provisions § 26.7 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in...

  12. Ensemble Classifiers for Predicting HIV-1 Resistance from Three Rule-Based Genotypic Resistance Interpretation Systems.

    PubMed

    Raposo, Letícia M; Nobre, Flavio F

    2017-08-30

    Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms have been developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from the Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme, and the selection of the interpretation system with the best performance. The strategies were compared with Friedman's test, and the performance of the classifiers was evaluated using F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making, since they provide a single resistance profile from the most commonly used resistance interpretation systems.
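
    The stacking strategy can be sketched with scikit-learn, using naïve Bayes as the meta-learner as in the paper's best configuration. The three generic base classifiers here merely stand in for the three rule-based interpretation systems whose outputs the authors actually combined, so this is an analogy rather than a reproduction.

      # Stacked generalization sketch: three base learners stand in for the
      # three rule-based interpretation systems; a naive Bayes meta-learner
      # combines their predictions.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, StackingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=300, n_features=10, random_state=0)

      stack = StackingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("tree", DecisionTreeClassifier(max_depth=3)),
                      ("rf", RandomForestClassifier(n_estimators=50))],
          final_estimator=GaussianNB(),
      )
      stack.fit(X, y)
      print("training accuracy:", stack.score(X, y))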

  13. Salinization of groundwater around underground LPG storage caverns, Korea : statistical interpretation

    NASA Astrophysics Data System (ADS)

    Lee, J.; Chang, H.

    2001-12-01

    In this research, we investigate the reciprocal influence between groundwater flow and the salinization observed at two underground cavern sites, using major ion chemistry, PCA of the chemical analysis data, and cross-correlation of various hydraulic data. The study areas are two underground LPG storage facilities constructed in the South Sea coastal region (Yosu) and the West Sea coastal region (Pyeongtaek) of Korea. Considerably high concentrations of major cations and anions showed that groundwaters at both sites were of brackish or saline water types. At the Yosu site, a marked chemical difference between groundwater samples from the rainy and dry seasons was caused by temporal intrusion of high-salinity water into the propane and butane cavern zone; this was not observed at the Pyeongtaek site. Cl/Br ratios and δ18O-δD distributions used for tracing the salinization source water revealed that two kinds of saline water (seawater and halite-dissolved solution) could influence groundwater salinization at the Yosu site, whereas only seawater intrusion could affect the groundwater chemistry of the observation wells at the Pyeongtaek site. PCA performed with 8 and 10 chemical ions as statistical variables at the two sites showed that intensive intrusion of seawater through the butane cavern occurred at the Yosu site, while seawater-groundwater mixing was observed at some observation wells located in the marginal part of the Pyeongtaek site. Cross-correlation results revealed that the positive relationship between hydraulic head and cavern operating pressure was far more conspicuous in the propane cavern zones at both sites (correlation coefficients of 65-90%). According to the cross-correlation results for the Yosu site, a small change of head could provoke a massive influx of halite-dissolved solution from the surface through vertically developed fracture networks. At the Pyeongtaek site, however, the pressure-sensitive observation wells are not completely consistent with the seawater-mixed wells, and the hydraulic change of heads at these wells related to the
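
    The PCA step can be sketched with scikit-learn on a standardized ion-concentration matrix. The ion list follows the kind of major-ion chemistry described, but all values below are synthetic.

      # PCA on a standardized ion-chemistry matrix, as used to separate
      # seawater-influenced samples. Ion values below are synthetic.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      ions = ["Na", "K", "Ca", "Mg", "Cl", "SO4", "HCO3", "Br"]
      rng = np.random.default_rng(4)
      X = rng.lognormal(mean=2.0, sigma=0.5, size=(30, len(ions)))  # 30 wells

      scores = PCA(n_components=2).fit_transform(
          StandardScaler().fit_transform(X))
      # PC1 typically tracks overall salinity; wells with extreme PC1
      # scores would be the candidates for seawater mixing.
      print(scores[:5])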

  14. Evaluation of 12 strategies for obtaining second opinions to improve interpretation of breast histopathology: simulation study.

    PubMed

    Elmore, Joann G; Tosteson, Anna Na; Pepe, Margaret S; Longton, Gary M; Nelson, Heidi D; Geller, Berta; Carney, Patricia A; Onega, Tracy; Allison, Kimberly H; Jackson, Sara L; Weaver, Donald L

    2016-06-22

    Objective: To evaluate the potential effect of second opinions on improving the accuracy of diagnostic interpretation of breast histopathology. Design: Simulation study of 12 different strategies for acquiring independent second opinions. Data: Interpretations of 240 breast biopsy specimens by 115 pathologists, one slide for each case, compared with reference diagnoses derived by expert consensus. Main outcome measures: Misclassification rates for individual pathologists and for 12 simulated strategies for second opinions. Simulations compared accuracy of diagnoses from single pathologists with that of diagnoses based on pairing interpretations from first and second independent pathologists, where resolution of disagreements was by an independent third pathologist. 12 strategies were evaluated in which acquisition of second opinions depended on initial diagnoses, assessment of case difficulty or borderline characteristics, pathologists' clinical volumes, or whether a second opinion was required by policy or desired by the pathologists. The 240 cases included benign without atypia (10% non-proliferative, 20% proliferative without atypia), atypia (30%), ductal carcinoma in situ (DCIS, 30%), and invasive cancer (10%). Overall misclassification rates and agreement statistics depended on the composition of the test set, which included a higher prevalence of difficult cases than in typical practice. Results: Misclassification rates significantly decreased (P<0.001) with all second opinion strategies except for the strategy limiting second opinions only to cases of invasive cancer. The overall misclassification rate decreased from 24.7% to 18.1% when all cases received second opinions (P<0.001). Obtaining both first and second opinions from pathologists with a high volume (≥10 breast biopsy specimens weekly) resulted in the lowest misclassification rate in this test set (14.3%, 95% confidence interval 10.9% to 18.0%). Obtaining second opinions only for cases with initial interpretations of atypia, DCIS, or invasive

  15. 7 CFR 2901.4 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Interpretations. 2901.4 Section 2901.4 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF ENERGY POLICY AND NEW USES, DEPARTMENT OF... interpretation. A copy of the written interpretation shall be provided to FERC and the Secretary of Energy...

  16. 7 CFR 2901.4 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Interpretations. 2901.4 Section 2901.4 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF ENERGY POLICY AND NEW USES, DEPARTMENT OF... interpretation. A copy of the written interpretation shall be provided to FERC and the Secretary of Energy...

  17. 7 CFR 2901.4 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 15 2014-01-01 2014-01-01 false Interpretations. 2901.4 Section 2901.4 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF ENERGY POLICY AND NEW USES, DEPARTMENT OF... interpretation. A copy of the written interpretation shall be provided to FERC and the Secretary of Energy...

  18. 7 CFR 2901.4 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 15 2013-01-01 2013-01-01 false Interpretations. 2901.4 Section 2901.4 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF ENERGY POLICY AND NEW USES, DEPARTMENT OF... interpretation. A copy of the written interpretation shall be provided to FERC and the Secretary of Energy...

  19. 7 CFR 2901.4 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 15 2012-01-01 2012-01-01 false Interpretations. 2901.4 Section 2901.4 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF ENERGY POLICY AND NEW USES, DEPARTMENT OF... interpretation. A copy of the written interpretation shall be provided to FERC and the Secretary of Energy...

  20. Maps and interpretation of geochemical anomalies, Chuckwalla Mountains Wilderness Study Area, Riverside County, California

    USGS Publications Warehouse

    Watts, K.C.

    1986-01-01

    This report discusses and interprets geochemical results as they are seen at the reconnaissance stage. Analytical results for all samples collected are released in a U.S. Geological Survey Open-File Report (Adrian and others, 1985). A statistical summary of the data from heavy-mineral concentrates and sieved stream sediments is shown in table 1. The analytical results for selected elements in rock samples are shown in table 2.

  1. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismicity emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study was conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying a correct model of the statistical distribution of the data, because these methods do not use the measurement data directly but rather their statistical distributions, as in methods based on hazard analysis or on maximum value statistics.
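
    The verification procedure described (fit a Weibull model, then test the fit against the empirical distribution with the Kolmogorov-Smirnov statistic) can be sketched with scipy. The data below are synthetic; note that estimating the parameters from the same sample makes the nominal KS p-value optimistic, a standard caveat of this procedure.

      # Fit a Weibull distribution and check it with a KS goodness-of-fit test.
      # Synthetic data; fitting parameters from the same sample biases the
      # nominal p-value upward (a known caveat).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      energies = stats.weibull_min.rvs(c=1.3, scale=2.0, size=500,
                                       random_state=rng)

      c_hat, loc_hat, scale_hat = stats.weibull_min.fit(energies, floc=0)
      ks_stat, p_value = stats.kstest(energies, "weibull_min",
                                      args=(c_hat, loc_hat, scale_hat))
      print(f"shape={c_hat:.2f}, scale={scale_hat:.2f}, KS p={p_value:.3f}")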

  2. Mid-infrared interferometry of AGNs: A statistical view into the dusty nuclear environment of the Seyfert Galaxies.

    NASA Astrophysics Data System (ADS)

    Lopez-Gonzaga, N.

    2015-09-01

    The high resolution achieved by the MIDI instrument at the VLTI has made it possible to obtain more detailed information about the geometry and structure of the nuclear mid-infrared emission of AGNs, but in the absence of real images the interpretation of the results is not an easy task. To profit more from the high-resolution data, we developed a statistical tool that allows these data to be interpreted using clumpy torus models. A statistical approach is needed to overcome effects such as the randomness in the positions of the clouds and the uncertainty in the true position angle on the sky. Our results, obtained by studying the mid-infrared emission at the highest resolution currently available, suggest that the dusty environment of Type I objects is formed by a smaller number of clouds than that of Type II objects.

  3. Statistics, Computation, and Modeling in Cosmology

    NASA Astrophysics Data System (ADS)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group, involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background and statistical modeling of galaxy formation. The former includes the study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems, including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and

  4. Easily overlooked sonographic findings in the evaluation of neonatal encephalopathy: lessons learned from magnetic resonance imaging.

    PubMed

    Dinan, David; Daneman, Alan; Guimaraes, Carolina V; Chauvin, Nancy A; Victoria, Teresa; Epelman, Monica

    2014-12-01

    Findings of neonatal encephalopathy (NE) and specifically those of hypoxic-ischemic injury are frequently evident on magnetic resonance imaging (MRI). Although MRI has become more widely used and has gained widespread acceptance as the study of choice for the evaluation of NE in recent years, its costs are high and access to MRI is sometimes limited for extremely sick neonates. Therefore, head sonography (US) continues to be the first-line imaging modality for the evaluation of the brain in neonates with NE; furthermore, in many of these infants, the diagnosis of NE may have first been made or suggested using head US. US is noninvasive, inexpensive, and portable, allowing examinations to be performed without moving the infant. However, many of the telltale signs of NE on US are subtle and may be easily overlooked, contributing to diagnostic delay or misdiagnosis. We aim to illustrate the spectrum of US findings in NE, with emphasis on those findings that may be easily overlooked on US. Recognition of these findings could potentially improve detection rates, reduce errors, and improve patient management. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. 10 CFR 9.5 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 9.5 Section 9.5 Energy NUCLEAR REGULATORY COMMISSION PUBLIC RECORDS § 9.5 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an officer or employee of the...

  6. 10 CFR 9.5 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Interpretations. 9.5 Section 9.5 Energy NUCLEAR REGULATORY COMMISSION PUBLIC RECORDS § 9.5 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an officer or employee of the...

  7. 10 CFR 9.5 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Interpretations. 9.5 Section 9.5 Energy NUCLEAR REGULATORY COMMISSION PUBLIC RECORDS § 9.5 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an officer or employee of the...

  8. 10 CFR 7.3 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Interpretations. 7.3 Section 7.3 Energy NUCLEAR REGULATORY COMMISSION ADVISORY COMMITTEES § 7.3 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an NRC officer or employee...

  9. 10 CFR 7.3 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Interpretations. 7.3 Section 7.3 Energy NUCLEAR REGULATORY COMMISSION ADVISORY COMMITTEES § 7.3 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an NRC officer or employee...

  10. 10 CFR 7.3 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 7.3 Section 7.3 Energy NUCLEAR REGULATORY COMMISSION ADVISORY COMMITTEES § 7.3 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an NRC officer or employee...

  11. A History of Oral Interpretation.

    ERIC Educational Resources Information Center

    Bahn, Eugene; Bahn, Margaret L.

    This historical account of the oral interpretation of literature establishes a chain of events comprehending 25 centuries of verbal tradition from the Homeric Age through 20th Century America. It deals in each era with the viewpoints and contributions of major historical figures to oral interpretation, as well as with oral interpretation's…

  12. 10 CFR 7.3 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 7.3 Section 7.3 Energy NUCLEAR REGULATORY COMMISSION ADVISORY COMMITTEES § 7.3 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an NRC officer or employee...

  13. 10 CFR 9.5 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 9.5 Section 9.5 Energy NUCLEAR REGULATORY COMMISSION PUBLIC RECORDS § 9.5 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an officer or employee of the...

  14. 10 CFR 7.3 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Interpretations. 7.3 Section 7.3 Energy NUCLEAR REGULATORY COMMISSION ADVISORY COMMITTEES § 7.3 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an NRC officer or employee...

  15. 10 CFR 9.5 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Interpretations. 9.5 Section 9.5 Energy NUCLEAR REGULATORY COMMISSION PUBLIC RECORDS § 9.5 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the regulations in this part by an officer or employee of the...

  16. The Optimum Text in Simultaneous Interpreting: A Cognitive Approach to Interpreters' Training.

    ERIC Educational Resources Information Center

    Alexieva, Bistra

    A discussion of text translatability in simultaneous interpreting (SI) looks at semantic redundancy, the repetition of semantic components essential to creating an utterance, and offers some classroom techniques for teaching interpreting skills. It is proposed that the translatability of a text in SI should be studied in terms of the experiential…

  17. The effectiveness of nurses' ability to interpret basic electrocardiogram strips accurately using different learning modalities.

    PubMed

    Spiva, LeeAnna; Johnson, Kimberly; Robertson, Bethany; Barrett, Darcy T; Jarrell, Nicole M; Hunter, Donna; Mendoza, Inocencia

    2012-02-01

    Historically, the instructional method of choice has been traditional lecture or face-to-face education; however, changes in the health care environment, including resource constraints, have necessitated examination of this practice. A descriptive pre-/posttest method was used to determine the effectiveness of alternative teaching modalities on nurses' knowledge and confidence in electrocardiogram (EKG) interpretation. A convenience sample of 135 nurses was recruited in an integrated health care system in the Southeastern United States. Nurses attended an instructor-led course, an online learning (e-learning) platform with no study time or 1 week of study time, or an e-learning platform coupled with a 2-hour post-course instructor-facilitated debriefing with no study time or 1 week of study time. Instruments included a confidence scale, an online EKG test, and a course evaluation. Statistically significant differences in knowledge and confidence were found for individual groups after nurses participated in the intervention. Statistically significant differences were found in pre-knowledge and post-confidence when groups were compared. Organizations that use various instructional methods to educate nurses in EKG interpretation can use different teaching modalities without negatively affecting nurses' knowledge or confidence in this skill. Copyright 2012, SLACK Incorporated.

  18. Application of Demand-Control Theory to Sign Language Interpreting: Implications for Stress and Interpreter Training.

    ERIC Educational Resources Information Center

    Dean, Robyn K.; Pollard, Robert Q., Jr.

    2001-01-01

    This article uses the framework of demand-control theory to examine the occupation of sign language interpreting. It discusses the environmental, interpersonal, and intrapersonal demands that impinge on the interpreter's decision latitude and notes the prevalence of cumulative trauma disorders, turnover, and burnout in the interpreting profession.…

  19. Functional programming interpreter. M. S. thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robison, A.D.

    1987-03-01

    Functional Programming (FP) [BAC87] is an alternative to conventional imperative programming languages. This thesis describes an FP interpreter implementation. Superficially, FP appears to be a simple but very inefficient language. Its simplicity, however, allows it to be interpreted quickly. Much of the inefficiency can be removed by simple interpreter techniques. This thesis describes the Illinois Functional Programming (IFP) interpreter, an interactive functional programming implementation which runs under both MS-DOS and UNIX. The IFP interpreter allows functions to be created, executed, and debugged in an environment very similar to UNIX. IFP's speed is competitive with other interpreted languages such as BASIC.
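
    To make the flavor of such an interpreter concrete, here is a minimal Python sketch of dictionary-style dispatch over a few FP-like combining forms (composition, construction, apply-to-all). It is an invented illustration, not the IFP implementation, whose internals the abstract does not describe.

        # Hypothetical sketch of an FP-style evaluator: expressions are tuples,
        # and a small dispatch loop interprets Backus-style combining forms.
        def evaluate(expr, x):
            """Evaluate an FP-like expression tree against input value x."""
            op, *args = expr
            if op == "id":
                return x
            if op == "const":
                return args[0]
            if op == "+":                   # primitive: add the components of a pair
                return x[0] + x[1]
            if op == "compose":             # (f . g):x = f:(g:x)
                f, g = args
                return evaluate(f, evaluate(g, x))
            if op == "construct":           # [f, g, ...]:x = <f:x, g:x, ...>
                return [evaluate(f, x) for f in args]
            if op == "apply_all":           # &f:<x1, ...> = <f:x1, ...>
                (f,) = args
                return [evaluate(f, xi) for xi in x]
            raise ValueError(f"unknown form: {op}")

        # Example: double every element by adding each number to itself.
        double_all = ("apply_all", ("compose", ("+",), ("construct", ("id",), ("id",))))
        print(evaluate(double_all, [1, 2, 3]))   # -> [2, 4, 6]

    The example lifts a binary primitive the typical FP way: a construction builds the pair, composition feeds it to the primitive, and apply-to-all maps the result over the sequence.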

  20. A highly versatile and easily configurable system for plant electrophysiology.

    PubMed

    Gunsé, Benet; Poschenrieder, Charlotte; Rankl, Simone; Schröeder, Peter; Rodrigo-Moreno, Ana; Barceló, Juan

    2016-01-01

    In this study we present a highly versatile and easily configurable system for measuring plant electrophysiological parameters and ionic flow rates, connected to a computer-controlled, highly accurate positioning device. The modular software used allows easily customizable configurations for the measurement of electrophysiological parameters. Both the operational tests and the experiments already performed have been fully successful and rendered a low-noise and highly stable signal. Assembly, programming and configuration examples are discussed. The system is a powerful technique that not only gives precise measurement of plant electrophysiological status, but also allows easy development of ad hoc configurations that are not constrained to plant studies.
    • We developed a highly modular system for electrophysiology measurements that can be used on either organs or cells and performs either steady or dynamic intra- and extracellular measurements, taking advantage of the ease of visual object-oriented programming.
    • The system acquires data with high accuracy in electrically noisy environments, allowing it to run even in a laboratory close to equipment that produces electrical noise.
    • The system improves on currently used systems for monitoring and controlling high-precision measurement and micromanipulation, providing an open and customizable environment for multiple experimental needs.

  1. The Interpretation of Mg/Ca in Ostracode Valves: Biokinetic vs. Thermodynamic Controls

    NASA Astrophysics Data System (ADS)

    Dettman, D. L.; Palacios-Fest, M. R.; Cohen, A. S.

    2004-12-01

    The geochemistry of the calcite valves of ostracodes (a group of micro-crustaceans) is often used to reconstruct the history of aqueous environments in both marine and fresh-water settings. These benthic animals can be very abundant in lakes and ponds and their low-Mg calcite valves are easily recovered from sediment cores. Many studies have used minor-element ratios (Mg/Ca and Sr/Ca) as indicators of temperature and/or salinity change through time and numerous calibration studies have been undertaken. There is considerable disagreement on the interpretation of both historical data and calibration studies because of differing views on what controls elemental ratios in ostracode valves. Here we focus on Mg/Ca ratios and critique the dominant assumption that Mg/Ca ratios in ostracode calcite are interpretable as a temperature-dependent distribution (or partition) coefficient. The use of a distribution coefficient, usually defined as a ratio of shell-to-water Mg/Ca ratios, assumes that the ratio in the water plays a significant role in the resultant ratio in the shell. Ostracode biomineralization is most commonly viewed as equivalent to inorganic precipitation of low-Mg calcite from solution, a system in which distribution coefficients are probably valid models. However, a re-examination of published studies shows that in many cases Mg/Ca(water) has no statistically demonstrable effect on the Mg/Ca ratio of ostracode valve calcite. The valve Mg/Ca ratio is most often a function of ambient temperature. In a number of studies the importance of the water's Mg/Ca ratio cannot be determined due to auto-correlation with other environmental factors. This implies that there is considerable biological control on the minor element chemistry of the ostracode valve. This is supported by a number of observations: valve calcification is rapid and initiated by the animal; Mg/Ca ratios within the valve vary greatly on a microscopic scale; the earliest carbonate formed during
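
    For reference, the distribution (partition) coefficient convention that the abstract critiques is conventionally written as follows (standard notation, not taken from this record):

        \[
          K_D(\mathrm{Mg}) \;=\;
          \frac{(\mathrm{Mg}/\mathrm{Ca})_{\text{valve calcite}}}
               {(\mathrm{Mg}/\mathrm{Ca})_{\text{water}}}
        \]

    Treating K_D as a temperature-dependent constant presumes that the denominator materially controls the numerator; the re-examined studies summarized above often fail to support that premise.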

  2. Cutting Corners: Provider Perceptions of Interpretation Services and Factors Related to Use of an Ad Hoc Interpreter.

    PubMed

    Mayo, Rachel; Parker, Veronica G; Sherrill, Windsor W; Coltman, Kinneil; Hudson, Matthew F; Nichols, Christina M; Yates, Adam M; Pribonic, Anne Paige

    2016-06-01

    This study assessed health providers' perceptions of factors related to professional interpretation services and the association between these factors and the potential use of ad hoc interpreters. Data were collected from a convenience sample of 150 health services providers at a large, regional health system in South Carolina. Providers rated "ability to communicate effectively during a clinical encounter" as paramount regarding the use of interpretation services. The most important factors related to the likely use of ad hoc interpreters (cutting corners) included locating a qualified interpreter, having to wait for a qualified interpreter, and technical difficulties regarding phone and video technology. Health care organizations may benefit from increasing staff awareness about patient safety and legal and regulatory risks involved with the use of ad hoc interpreters. © The Author(s) 2016.

  3. Informal interpreting in general practice: Are interpreters' roles related to perceived control, trust, and satisfaction?

    PubMed

    Zendedel, Rena; Schouten, Barbara C; van Weert, Julia C M; van den Putte, Bas

    2018-06-01

    The aim of this observational study was twofold. First, we examined how often and which roles informal interpreters performed during consultations between Turkish-Dutch migrant patients and general practitioners (GPs). Second, relations between these roles and patients' and GPs' perceived control, trust in informal interpreters and satisfaction with the consultation were assessed. A coding instrument was developed to quantitatively code informal interpreters' roles from transcripts of 84 audio-recorded interpreter-mediated consultations in general practice. Patients' and GPs' perceived control, trust and satisfaction were assessed in a post-consultation questionnaire. Informal interpreters most often performed the conduit role (almost 25% of all coded utterances), and also frequently acted as replacers and excluders of patients and GPs by asking and answering questions on their own behalf, and by ignoring and omitting patients' and GPs' utterances. The role of information source was negatively related to patients' trust and the role of GP excluder was negatively related to patients' perceived control. Patients and GPs are possibly insufficiently aware of the roles performed by informal interpreters, as these were barely related to patients' and GPs' perceived trust, control and satisfaction. Patients and GPs should be educated about the possible negative consequences of informal interpreting. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Statistical Neurodynamics.

    NASA Astrophysics Data System (ADS)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better

  5. Validity threats: overcoming interference with proposed interpretations of assessment data.

    PubMed

    Downing, Steven M; Haladyna, Thomas M

    2004-03-01

    Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.

  6. Environmental statistics and optimal regulation.

    PubMed

    Sivak, David A; Thomson, Matt

    2014-09-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies--such as constitutive expression or graded response--for regulating protein levels in response to environmental inputs. We propose a general framework--here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
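
    As a toy illustration of question (ii) above, the following Python sketch applies a Bayesian decision rule to a noisy nutrient measurement in a two-state environment; the priors, noise level, and enzyme costs are invented placeholders, not parameters from the paper.

        # Toy Bayesian decision rule for enzyme expression (hypothetical numbers):
        # the environment is 'low' or 'high' nutrient, the cell sees a noisy
        # measurement, and it expresses the enzyme only when the posterior
        # probability of 'high' justifies the expression cost.
        from scipy.stats import norm

        def posterior_high(m, prior_high=0.3, mu_low=0.0, mu_high=1.0, sigma=0.5):
            """P(nutrient high | measurement m) in a two-state environment."""
            like_high = norm.pdf(m, mu_high, sigma) * prior_high
            like_low = norm.pdf(m, mu_low, sigma) * (1.0 - prior_high)
            return like_high / (like_high + like_low)

        def express_enzyme(m, benefit=5.0, cost=2.0):
            # Express iff expected benefit exceeds cost:
            # P(high|m) * benefit > cost  <=>  P(high|m) > cost / benefit.
            return posterior_high(m) > cost / benefit

        for m in (0.2, 0.6, 1.1):
            print(f"m={m:.1f}  P(high|m)={posterior_high(m):.3f}  express={express_enzyme(m)}")

    A hard threshold on the raw measurement corresponds to the degenerate case of this rule; the Bayesian version pays off when measurement uncertainty is intermediate, as the abstract notes.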

  7. Environmental Statistics and Optimal Regulation

    PubMed Central

    2014-01-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies–such as constitutive expression or graded response–for regulating protein levels in response to environmental inputs. We propose a general framework–here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient–to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493

  8. Reproducibility-optimized test statistic for ranking genes in microarray studies.

    PubMed

    Elo, Laura L; Filén, Sanna; Lahesmaa, Riitta; Aittokallio, Tero

    2008-01-01

    A principal goal of microarray studies is to identify the genes showing differential expression under distinct conditions. In such studies, the selection of an optimal test statistic is a crucial challenge, which depends on the type and amount of data under analysis. While previous studies on simulated or spike-in datasets do not provide practical guidance on how to choose the best method for a given real dataset, we introduce an enhanced reproducibility-optimization procedure, which enables the selection of a suitable gene-ranking statistic directly from the data. In comparison with existing ranking methods, the reproducibility-optimized statistic shows good performance consistently under various simulated conditions and on an Affymetrix spike-in dataset. Further, the feasibility of the novel statistic is confirmed in a practical research setting using data from an in-house cDNA microarray study of asthma-related gene expression changes. These results suggest that the procedure facilitates the selection of an appropriate test statistic for a given dataset without relying on a priori assumptions, which may bias the findings and their interpretation. Moreover, the general reproducibility-optimization procedure is not limited to detecting differential expression only but could be extended to a wide range of other applications as well.
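
    The following Python sketch conveys the reproducibility-optimization idea in a simplified form: tune the regularization constant of a modified t-statistic by choosing the value whose top-ranked gene list is most stable across random half-splits of the samples. It is an invented simplification for illustration, not the published procedure.

        # Simplified reproducibility-optimized ranking: pick the regularization
        # `a` in |d| / (a + s) that makes top-k lists most stable across splits.
        import numpy as np

        rng = np.random.default_rng(1)
        n_genes, n_per_group = 500, 10
        data_a = rng.normal(0, 1, (n_genes, n_per_group))     # condition A
        data_b = rng.normal(0, 1, (n_genes, n_per_group))     # condition B
        data_b[:25] += 1.5                                    # 25 truly changed genes

        def ranking(xa, xb, a):
            d = xa.mean(1) - xb.mean(1)
            s = np.sqrt(xa.var(1, ddof=1) / xa.shape[1] + xb.var(1, ddof=1) / xb.shape[1])
            return np.argsort(-np.abs(d / (a + s)))           # best genes first

        def reproducibility(a, k=50, n_splits=20):
            """Mean overlap of the top-k gene lists from two random half-splits."""
            overlaps = []
            for _ in range(n_splits):
                idx = rng.permutation(n_per_group)
                half1, half2 = idx[: n_per_group // 2], idx[n_per_group // 2 :]
                top1 = set(ranking(data_a[:, half1], data_b[:, half1], a)[:k])
                top2 = set(ranking(data_a[:, half2], data_b[:, half2], a)[:k])
                overlaps.append(len(top1 & top2) / k)
            return np.mean(overlaps)

        for a in (0.0, 0.1, 0.5, 1.0):
            print(f"a={a}: top-list reproducibility {reproducibility(a):.2f}")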

  9. Interpretation of scrape-off layer profile evolution and first-wall ion flux statistics on JET using a stochastic framework based on filamentary motion

    NASA Astrophysics Data System (ADS)

    Walkden, N. R.; Wynn, A.; Militello, F.; Lipschultz, B.; Matthews, G.; Guillemaut, C.; Harrison, J.; Moulton, D.; JET Contributors

    2017-08-01

    This paper presents the use of a novel modelling technique based around intermittent transport due to filament motion, to interpret experimental profile and fluctuation data in the scrape-off layer (SOL) of JET during the onset and evolution of a density profile shoulder. A baseline case is established, prior to shoulder formation, and the stochastic model is shown to be capable of simultaneously matching the time-averaged profile measurement as well as the PDF shape and autocorrelation function from the ion-saturation current time series at the outer wall. Aspects of the stochastic model are then varied with the aim of producing a profile shoulder with statistical measurements consistent with experiment. This is achieved through a strong localised reduction in the density sink acting on the filaments within the model. The required reduction of the density sink occurs over a highly localised region, with the timescale of the density sink increased by a factor of 25. This alone is found to be insufficient to model the expansion and flattening of the shoulder region as the density increases, which requires additional changes within the stochastic model. An example is found which includes both a reduction in the density sink and filament acceleration and provides a consistent match to the experimental data as the shoulder expands, though the uniqueness of this solution cannot be guaranteed. Within the context of the stochastic model, this implies that the localised reduction in the density sink can trigger shoulder formation, but additional physics is required to explain the subsequent evolution of the profile.
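
    A minimal Python sketch of the filament-based stochastic picture underlying such models: an ion-saturation-current-like signal built as a superposition of exponentially decaying bursts arriving at Poisson times, producing the intermittent, positively skewed statistics that frameworks of this kind are fitted against. All rates and timescales below are invented.

        # Synthetic intermittent signal from filament-like bursts (illustrative only).
        import numpy as np

        rng = np.random.default_rng(7)
        t = np.arange(0.0, 50_000.0, 1.0)           # time axis, arbitrary units
        signal = np.zeros_like(t)
        burst_times = t[rng.random(t.size) < 1e-3]  # ~Poisson arrivals, rate 1e-3
        for t0 in burst_times:
            amp = rng.exponential(1.0)              # exponentially distributed sizes
            mask = t >= t0
            signal[mask] += amp * np.exp(-(t[mask] - t0) / 50.0)   # decay time 50

        mean, std = signal.mean(), signal.std()
        skew = ((signal - mean) ** 3).mean() / std**3
        print(f"mean={mean:.2f}, std={std:.2f}, skewness={skew:.2f} (positive => intermittent)")

    In the paper's terms, weakening the density sink acting on the filaments corresponds to slower burst decay, which fattens the far-SOL tail of the profile.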

  10. Interpretive versus noninterpretive content in top-selling radiology textbooks: what are we teaching medical students?

    PubMed

    Webb, Emily M; Vella, Maya; Straus, Christopher M; Phelps, Andrew; Naeger, David M

    2015-04-01

    There are few data as to whether appropriate, cost-effective, and safe ordering of imaging examinations is adequately taught in US medical school curricula. We sought to determine the proportion of noninterpretive content (such as appropriate ordering) versus interpretive content (such as reading a chest x-ray) in the top-selling medical student radiology textbooks. We performed an online search to identify a ranked list of the six top-selling general radiology textbooks for medical students. Each textbook was reviewed, including content in the text, tables, images, figures, appendices, practice questions, question explanations, and glossaries. Individual pages of text and individual images were semiquantitatively scored on a six-level scale as to the percentage of material that was interpretive versus noninterpretive. The predominant imaging modality addressed in each was also recorded. Descriptive statistical analysis was performed. All six books had more interpretive content. On average, 1.4 pages of text focused on interpretation for every one page focused on noninterpretive content. Seventeen images/figures were dedicated to interpretive skills for every one focused on noninterpretive skills. In all books, the largest proportion of text and image content was dedicated to plain films (51.2%), with computed tomography (CT) a distant second (16%). The content on radiographs (3.1:1) and CT (1.6:1) was more interpretive than not. The current six top-selling medical student radiology textbooks contain a preponderance of material teaching image interpretation compared to material teaching noninterpretive skills, such as appropriate imaging examination selection, rational utilization, and patient safety. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  11. Pixel Statistical Analysis of Diabetic vs. Non-diabetic Foot-Sole Spectral Terahertz Reflection Images

    NASA Astrophysics Data System (ADS)

    Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.

    2018-03-01

    In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG-color-coded (red, yellow, green) images in which pixels are assigned one of the three colors in order to easily identify areas at risk of ulceration. We also present the statistics of the number of pixels of each color as a potential quantitative indicator of diabetic foot syndrome deterioration.
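
    A minimal Python sketch of the RYG coding step as described: threshold a hydration map into red/yellow/green and count pixels per color. The 0.45/0.60 cut points are invented placeholders, not the thresholds used in the article.

        # Threshold a hydration map into an RYG image and tally pixels per color.
        import numpy as np

        def ryg_map(hydration, low=0.45, high=0.60):
            """Return an RGB image: red = at risk, yellow = borderline, green = normal."""
            rgb = np.zeros(hydration.shape + (3,))
            rgb[hydration < low] = (1.0, 0.0, 0.0)                          # red
            rgb[(hydration >= low) & (hydration < high)] = (1.0, 1.0, 0.0)  # yellow
            rgb[hydration >= high] = (0.0, 1.0, 0.0)                        # green
            return rgb

        hydration = np.random.default_rng(0).uniform(0.3, 0.8, (4, 4))
        image = ryg_map(hydration)
        counts = {c: int((image == v).all(-1).sum())
                  for c, v in [("red", (1, 0, 0)), ("yellow", (1, 1, 0)), ("green", (0, 1, 0))]}
        print(counts)   # pixel counts per color: the quantitative indicator described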

  12. Learning Predictive Statistics: Strategies and Brain Mechanisms.

    PubMed

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-08-30

    When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to
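
    The maximizing-versus-matching distinction above can be made concrete with a small simulation on a biased binary stream (the 0.75 bias is an invented example): maximizing converges on accuracy equal to the dominant outcome's probability p, while probability matching yields roughly p^2 + (1-p)^2.

        # Simulate both decision strategies on a biased binary stream.
        import numpy as np

        rng = np.random.default_rng(3)
        p = 0.75                                   # P(next outcome = 1), assumed bias
        stream = rng.random(100_000) < p

        maximizing = np.ones_like(stream)          # always predict the likelier outcome
        matching = rng.random(stream.size) < p     # predict 1 at the outcome's frequency

        print(f"maximizing accuracy: {(maximizing == stream).mean():.3f}")   # ~ p = 0.75
        print(f"matching accuracy:   {(matching == stream).mean():.3f}")     # ~ 0.625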

  13. Invited Commentary: Antecedents of Obesity—Analysis, Interpretation, and Use of Longitudinal Data

    PubMed Central

    Gillman, Matthew W.; Kleinman, Ken

    2007-01-01

    The obesity epidemic causes misery and death. Most epidemiologists accept the hypothesis that characteristics of the early stages of human development have lifelong influences on obesity-related health outcomes. Unfortunately, there is a dearth of data of sufficient scope and individual history to help unravel the associations of prenatal, postnatal, and childhood factors with adult obesity and health outcomes. Here the authors discuss analytic methods, the interpretation of models, and the use to which such rare and valuable data may be put in developing interventions to combat the epidemic. For example, analytic methods such as quantile and multinomial logistic regression can describe the effects on body mass index range rather than just its mean; structural equation models may allow comparison of the contributions of different factors at different periods in the life course. Interpretation of the data and model construction is complex, and it requires careful consideration of the biologic plausibility and statistical interpretation of putative causal factors. The goals of discovering modifiable determinants of obesity during the prenatal, postnatal, and childhood periods must be kept in sight, and analyses should be built to facilitate them. Ultimately, interventions in these factors may help prevent obesity-related adverse health outcomes for future generations. PMID:17490988

  14. Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.

    1997-01-01

    This presentation will describe two applications of Bayesian statistics to Gamma-Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because the Bayesian approach easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
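
    A worked toy version of the first application, with invented numbers: if the cosmological hypothesis predicts a dipole statistic of zero up to Gaussian measurement scatter, while the galactic hypothesis spreads the same statistic uniformly over positive values, the Bayes factor is the ratio of the two marginal likelihoods at the observed value.

        # Toy Bayes-factor computation with invented numbers (not the actual analysis).
        # H_cosmo: the dipole statistic is 0 up to Gaussian scatter sigma.
        # H_gal:   the statistic is uniform on [0, 1] before the same scatter.
        from scipy.stats import norm

        x_obs, sigma = 0.005, 0.01     # invented observed moment and its error

        like_cosmo = norm.pdf(x_obs, loc=0.0, scale=sigma)

        # Marginal likelihood under H_gal: integral of N(x_obs; mu, sigma) for
        # mu ~ Uniform(0, 1), which reduces to a difference of normal CDFs.
        like_gal = norm.cdf((1.0 - x_obs) / sigma) - norm.cdf((0.0 - x_obs) / sigma)

        print(f"Bayes factor, cosmological over galactic: {like_cosmo / like_gal:.1f}")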

  15. The Easily Learned, Easily Remembered Heuristic in Children

    ERIC Educational Resources Information Center

    Koriat, Asher; Ackerman, Rakefet; Lockl, Kathrin; Schneider, Wolfgang

    2009-01-01

    A previous study with adults [Koriat, A. (2008a). "Easy comes, easy goes? The link between learning and remembering and its exploitation in metacognition." "Memory & Cognition," 36, 416-428] established a correlation between learning and remembering: items requiring more trials to acquisition (TTA) were less likely to be recalled than those…

  16. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models.

    PubMed

    Ding, Jiarui; Condon, Anne; Shah, Sohrab P

    2018-05-21

    Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.

  17. The role of interpreters in inclusive classrooms.

    PubMed

    Antia, S D; Kreimeyer, K H

    2001-10-01

    The roles of interpreters in an inclusive classroom were examined through a qualitative, 3-year case study of three interpreters in an inclusive school. Interviews were conducted with interpreters, classroom teachers, special education teachers, and administrators. The interview data were supplemented with observations and field notes. Results indicate that in addition to sign interpreting between American Sign Language and speech, the interpreters clarified teacher directions, facilitated peer interaction, tutored the deaf children, and kept the teachers and special educators informed of the deaf children's progress. The interpreter/aides and the classroom teachers preferred this full-participant interpreter role, while the special educators and administrators preferred a translator role. Classroom teachers were more comfortable with full-time interpreters who knew the classroom routine, while the special educators and administrators feared that full-time interpreters fostered child and teacher dependence. These issues are discussed in terms of congruence with the Registry of Interpreters code of ethics and how integration of young children might be best facilitated.

  18. Descriptive and inferential statistical methods used in burns research.

    PubMed

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The six most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), chi-squared test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals
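
    For readers wanting to reproduce the common tests named above, all six are available in scipy.stats; the samples below are invented placeholders purely to show usage.

        # The six most common tests from the survey, run on invented toy samples.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        g1, g2, g3 = rng.normal(10, 2, 30), rng.normal(11, 2, 30), rng.normal(12, 2, 30)

        print(stats.ttest_ind(g1, g2))               # Student's t-test
        print(stats.f_oneway(g1, g2, g3))            # one-way analysis of variance
        print(stats.chisquare([18, 22, 20]))         # chi-squared goodness of fit
        print(stats.mannwhitneyu(g1, g2))            # Mann-Whitney U test
        print(stats.wilcoxon(g1 - g2))               # Wilcoxon signed-rank (paired)
        print(stats.fisher_exact([[8, 2], [1, 9]]))  # Fisher's exact test (2x2 table)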

  19. Radiological interpretation of images displayed on tablet computers: a systematic review

    PubMed Central

    Armfield, N R; Smith, A C

    2015-01-01

    Objective: To review the published evidence and to determine if radiological diagnostic accuracy is compromised when images are displayed on a tablet computer and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. Methods: We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Results: 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84–98%), specificity (74–100%) and accuracy rates (98–100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was a near complete consensus from authors on the non-inferiority of diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Conclusion: Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. Advances in knowledge: The iPad may be appropriate for an on-call radiologist to use for radiological interpretation. PMID:25882691

  20. Radiological interpretation of images displayed on tablet computers: a systematic review.

    PubMed

    Caffery, L J; Armfield, N R; Smith, A C

    2015-06-01

    To review the published evidence and to determine if radiological diagnostic accuracy is compromised when images are displayed on a tablet computer and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84-98%), specificity (74-100%) and accuracy rates (98-100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was a near complete consensus from authors on the non-inferiority of diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. The iPad may be appropriate for an on-call radiologist to use for radiological interpretation.
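
    For reference, the accuracy metrics pooled by both versions of this review are defined in the standard way (TP, TN, FP, FN denote true/false positives/negatives):

        \[
          \text{sensitivity} = \frac{TP}{TP + FN}, \qquad
          \text{specificity} = \frac{TN}{TN + FP}, \qquad
          \text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
        \]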