Sample records for descriptive statistics based

  1. Biostatistics primer: part I.

    PubMed

    Overholser, Brian R; Sowinski, Kevin M

    2007-12-01

    Biostatistics is the application of statistics to biologic data. The field of statistics can be broken down into 2 fundamental parts: descriptive and inferential. Descriptive statistics are commonly used to categorize, display, and summarize data. Inferential statistics can be used to make predictions based on a sample obtained from a population or some large body of information. It is these inferences that are used to test specific research hypotheses. This 2-part review will outline important features of descriptive and inferential statistics as they apply to commonly conducted research studies in the biomedical literature. Part 1 in this issue will discuss fundamental topics of statistics and data analysis. Additionally, some of the most commonly used statistical tests found in the biomedical literature will be reviewed in Part 2 in the February 2008 issue.

  2. Longitudinal Assessment of Self-Reported Recent Back Pain and Combat Deployment in the Millennium Cohort Study

    DTIC Science & Technology

    2016-11-15

    ...participants who were followed for the development of back pain for an average of 3.9 years. Methods: descriptive statistics and longitudinal... Keywords: health, military personnel, occupational health, outcome assessment, statistics, survey methodology. Level of Evidence: 3. Spine 2016;41:1754–1763. ...based on the National Health and Nutrition Examination Survey. Statistical Analysis: descriptive and univariate analyses compared characteristics...

  3. Evidence-based orthodontics. Current statistical trends in published articles in one journal.

    PubMed

    Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J

    2010-09-01

    To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) articles for 2008. These data were then compared to data from three previous years: 1975, 1985, and 2003. The frequency and distribution of statistics used in the AJODO original articles for 2008 were dichotomized into those using statistics and those not using statistics. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; the proportion of original articles using statistics stayed relatively the same from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in articles using only descriptive statistics (from 7.1% to 15.6%) and an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).

  4. Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach

    ERIC Educational Resources Information Center

    Holmes, Karen Y.; Dodd, Brett A.

    2012-01-01

    In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)

  5. A Technology-Based Statistical Reasoning Assessment Tool in Descriptive Statistics for Secondary School Students

    ERIC Educational Resources Information Center

    Chan, Shiau Wei; Ismail, Zaleha

    2014-01-01

    The focus of assessment in statistics has gradually shifted from traditional assessment towards alternative assessment where more attention has been paid to the core statistical concepts such as center, variability, and distribution. In spite of this, there are comparatively few assessments that combine the significant three types of statistical…

  6. Statistical representation of multiphase flow

    NASA Astrophysics Data System (ADS)

    Subramaniam

    2000-11-01

    The relationship between two common statistical representations of multiphase flow, namely the single-point Eulerian statistical representation of two-phase flow (D. A. Drew, Ann. Rev. Fluid Mech. (15), 1983) and the Lagrangian statistical representation of a spray using the droplet distribution function (F. A. Williams, Phys. Fluids 1 (6), 1958), is established for spherical dispersed-phase elements. This relationship is based on recent work which relates the droplet distribution function to single-droplet pdfs starting from a Liouville description of a spray (Subramaniam, Phys. Fluids 10 (12), 2000). The Eulerian representation, which is based on a random-field model of the flow, is shown to contain different statistical information from the Lagrangian representation, which is based on a point-process model. The two descriptions are shown to be simply related for spherical, monodisperse elements in statistically homogeneous two-phase flow, whereas such a simple relationship is precluded by the inclusion of polydispersity and statistical inhomogeneity. The common origin of these two representations is traced to a more fundamental statistical representation of a multiphase flow, whose concepts derive from a theory for dense sprays recently proposed by Edwards (Atomization and Sprays 10 (3-5), 2000). The issue of what constitutes a minimally complete statistical representation of a multiphase flow is resolved.

  7. Coupling Changing Student Demographics with Evidence-Based Leadership Practices: Leading Hispanic Friendly Learning Organizations

    ERIC Educational Resources Information Center

    Farmer, Tod Allen

    2012-01-01

    The study assessed the need for learning organizations to implement evidence-based policies and practices designed to enhance the academic and social success of Hispanic learners. Descriptive statistics and longitudinal data from the National Center for Educational Statistics (NCES) and the National Clearinghouse for English Language Acquisition…

  8. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    ERIC Educational Resources Information Center

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  9. Description of the Role of Shot Noise in Spectroscopic Absorption and Emission Measurements with Photodiode and Photomultiplier Tube Detectors: Information for an Instrumental Analysis Course

    ERIC Educational Resources Information Center

    McClain, Robert L.; Wright, John C.

    2014-01-01

    A description of shot noise and the role it plays in absorption and emission measurements using photodiode and photomultiplier tube detection systems is presented. This description includes derivations of useful forms of the shot noise equation based on Poisson counting statistics. This approach can deepen student understanding of a fundamental…

  10. Pupil Size in Outdoor Environments

    DTIC Science & Technology

    2007-04-06

    Recovered table captions from the report's front matter: Table 3: Descriptive statistics for pupils measured over luminance range; Table 4: N in each strata for all pupil measurements; Table 5: Descriptive statistics stratified against eye color; Table 6: Descriptive statistics stratified against gender; Table 7: Descriptive...

  11. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived and then solved by matrix algebra. A brief description of the code is presented along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of the nonlinear curve fit, two typical engineering problems are examined, and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
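
    The routine described above linearizes the model around the current parameter estimate and solves the resulting simultaneous linear equations for the parameter update. The sketch below is a minimal, generic Python illustration of that idea, not the NASA program itself; the model function, finite-difference step, and convergence tolerance are illustrative assumptions.

    ```python
    import numpy as np

    def fit_chi_square_expansion(model, x, y, sigma, p0, n_iter=20, tol=1e-8):
        """Minimize chi-square by repeatedly linearizing the model about the
        current parameter estimate (the quadratic-expansion idea described above)."""
        p = np.asarray(p0, dtype=float)
        w = 1.0 / np.asarray(sigma, dtype=float) ** 2          # statistical weights
        for _ in range(n_iter):
            r = y - model(x, p)                                 # residuals
            J = np.empty((len(x), len(p)))                      # finite-difference Jacobian
            for j in range(len(p)):
                dp = np.zeros_like(p)
                dp[j] = 1e-6 * max(abs(p[j]), 1.0)
                J[:, j] = (model(x, p + dp) - model(x, p - dp)) / (2 * dp[j])
            # Retaining only the first-order term yields the normal equations
            A = J.T @ (w[:, None] * J)
            b = J.T @ (w * r)
            delta = np.linalg.solve(A, b)
            p = p + delta
            if np.max(np.abs(delta)) < tol:
                break
        chi2 = float(np.sum(w * (y - model(x, p)) ** 2))        # goodness of fit
        return p, chi2

    # Illustrative use: fit an exponential decay y = a * exp(-b * x)
    model = lambda x, p: p[0] * np.exp(-p[1] * x)
    x = np.linspace(0, 5, 50)
    rng = np.random.default_rng(0)
    y = model(x, [2.0, 0.7]) + rng.normal(0, 0.05, x.size)
    params, chi2 = fit_chi_square_expansion(model, x, y, sigma=0.05 * np.ones_like(x), p0=[1.0, 1.0])
    ```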

  12. Appraisal of within- and between-laboratory reproducibility of non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: comparison of OECD TG429 performance standard and statistical evaluation.

    PubMed

    Yang, Hyeri; Na, Jihye; Jang, Won-Hee; Jung, Mi-Sook; Jeon, Jun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Lim, Kyung-Min; Bae, SeungJin

    2015-05-05

    The mouse local lymph node assay (LLNA, OECD TG429) is an alternative test replacing conventional guinea pig tests (OECD TG406) for skin sensitization, but its use of a radioisotopic agent, (3)H-thymidine, deters its active dissemination. A new non-radioisotopic LLNA, LLNA:BrdU-FCM, employs a non-radioisotopic analog, 5-bromo-2'-deoxyuridine (BrdU), and flow cytometry. For an analogous method, the OECD TG429 performance standard (PS) advises that two reference compounds be tested repeatedly and that the ECt (threshold) values obtained fall within acceptable ranges to prove within- and between-laboratory reproducibility. However, these criteria are somewhat arbitrary, and the sample size of ECt is less than 5, raising concerns about insufficient reliability. Here, we explored various statistical methods to evaluate the reproducibility of LLNA:BrdU-FCM with the stimulation index (SI), the raw data for ECt calculation, produced from 3 laboratories. Descriptive statistics along with graphical representation of SI were presented. For inferential statistics, parametric and non-parametric methods were applied to test the reproducibility of the SI of a concurrent positive control, and the robustness of the results was investigated. Descriptive statistics and graphical representation of SI alone could illustrate the within- and between-laboratory reproducibility. Inferential statistics employing parametric and nonparametric methods drew similar conclusions. While all labs passed the within- and between-laboratory reproducibility criteria given by the OECD TG429 PS based on ECt values, statistical evaluation based on SI values showed that only two labs succeeded in achieving within-laboratory reproducibility. For those two labs that satisfied within-lab reproducibility, between-laboratory reproducibility could also be attained based on inferential as well as descriptive statistics. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Statistical analysis of vehicle crashes in Mississippi based on crash data from 2010 to 2014.

    DOT National Transportation Integrated Search

    2017-08-15

    Traffic crash data from 2010 to 2014 were collected by Mississippi Department of Transportation (MDOT) and extracted for the study. Three tasks were conducted in this study: (1) geographic distribution of crashes; (2) descriptive statistics of crash ...

  14. Reframing Serial Murder Within Empirical Research.

    PubMed

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  15. Data-Base for Communication Planning. The Basic and Statistical Data Required for the Elaboration of a Plan for a National Communication System.

    ERIC Educational Resources Information Center

    Rahim, Syed A.

    Based in part on a list developed by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) for use in Afghanistan, this document presents a comprehensive checklist of items of statistical and descriptive data required for planning a national communication system. It is noted that such a system provides the vital…

  16. Physics-based statistical learning approach to mesoscopic model selection.

    PubMed

    Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
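
    The selection step the abstract relies on, choosing the model complexity that best predicts held-out data, is ordinary k-fold cross-validation. Below is a minimal, generic sketch of that selection loop using polynomial least squares as a stand-in model family; the fit and predict functions, degrees, and fold count are illustrative assumptions, not the authors' code or the sGLE itself.

    ```python
    import numpy as np

    def select_complexity(x, y, fit, predict, complexities, k=5, seed=0):
        """Pick the model complexity with the lowest k-fold cross-validated test error."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(x)), k)
        cv_errors = []
        for c in complexities:
            fold_errors = []
            for i in range(k):
                test = folds[i]
                train = np.concatenate([folds[j] for j in range(k) if j != i])
                params = fit(x[train], y[train], c)              # learn on training folds
                fold_errors.append(np.mean((predict(x[test], params) - y[test]) ** 2))
            cv_errors.append(np.mean(fold_errors))
        return complexities[int(np.argmin(cv_errors))], cv_errors

    # Stand-in model family: polynomials of increasing degree
    fit = lambda x, y, degree: np.polyfit(x, y, degree)
    predict = lambda x, params: np.polyval(params, x)

    x = np.linspace(-1, 1, 80)
    y = np.sin(3 * x) + np.random.default_rng(1).normal(0, 0.1, x.size)
    best_degree, errors = select_complexity(x, y, fit, predict, complexities=list(range(1, 9)))
    ```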

  17. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  18. Quality of reporting statistics in two Indian pharmacology journals.

    PubMed

    Jaykaran; Yadav, Preeti

    2011-04-01

    To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. All original articles published since 2002 were downloaded from the journals' (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) websites. These articles were evaluated on the basis of the appropriateness of descriptive and inferential statistics. Descriptive statistics were evaluated on the basis of the reporting of the method of description and central tendencies. Inferential statistics were evaluated on the basis of whether the assumptions of the statistical methods were fulfilled and whether the statistical tests were appropriate. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Inappropriate descriptive statistics were observed in 150 (78.1%, 95% CI 71.7-83.3%) articles. The most common reason for inappropriate descriptive statistics was the use of mean ± SEM in place of "mean (SD)" or "mean ± SD." The most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of the assumptions of the statistical tests was mentioned in only two articles. An inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6-38.6%) articles. The most common reason for an inappropriate statistical test was the use of a two-group test for three or more groups. Articles published in the two Indian pharmacology journals are not devoid of statistical errors.
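
    The most frequent error the review reports, presenting mean ± SEM where mean (SD) is meant, comes down to two different quantities. A short illustration with made-up numbers of how the two are computed and why they should not be interchanged:

    ```python
    import numpy as np

    values = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7])   # illustrative measurements

    mean = values.mean()
    sd = values.std(ddof=1)              # sample SD: describes the spread of the observations
    sem = sd / np.sqrt(values.size)      # SEM: describes the precision of the estimated mean

    # Reporting "mean +/- SEM" where "mean (SD)" is intended understates the variability
    # of the data, which is the inappropriate usage the review identifies most often.
    print(f"mean (SD):  {mean:.2f} ({sd:.2f})")
    print(f"mean ± SEM: {mean:.2f} ± {sem:.2f}")
    ```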

  19. The Social Profile of Students in Basic General Education in Ecuador: A Data Analysis

    ERIC Educational Resources Information Center

    Buri, Olga Elizabeth Minchala; Stefos, Efstathios

    2017-01-01

    The objective of this study is to examine the social profile of students who are enrolled in Basic General Education in Ecuador. Both descriptive and multidimensional statistical analyses were carried out based on the data provided by the National Survey of Employment, Unemployment and Underemployment in 2015. The descriptive analysis shows the…

  20. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  1. Complex networks as a unified framework for descriptive analysis and predictive modeling in climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhaeuser, Karsten J K; Chawla, Nitesh; Ganguly, Auroop R

    The analysis of climate data has relied heavily on hypothesis-driven statistical methods, while projections of future climate are based primarily on physics-based computational models. However, in recent years a wealth of new datasets has become available. Therefore, we take a more data-centric approach and propose a unified framework for studying climate, with an aim towards characterizing observed phenomena as well as discovering new knowledge in the climate domain. Specifically, we posit that complex networks are well-suited for both descriptive analysis and predictive modeling tasks. We show that the structural properties of climate networks have useful interpretation within the domain. Further, we extract clusters from these networks and demonstrate their predictive power as climate indices. Our experimental results establish that the network clusters are statistically significantly better predictors than clusters derived using a more traditional clustering approach. Using complex networks as data representation thus enables the unique opportunity for descriptive and predictive modeling to inform each other.

  2. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
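
    As a rough illustration of how insertion errors propagate into an inclination error statistic (not the Scott flight history data base or the program's full six-parameter closed-form treatment), the sketch below applies first-order error propagation to the spherical-trigonometry relation cos i = cos(latitude) · sin(azimuth); the error magnitudes are invented for the example.

    ```python
    import numpy as np

    def inclination_error_stats(lat_deg, az_deg, sigma_lat_deg, sigma_az_deg):
        """First-order propagation of latitude and launch-azimuth errors into
        orbital-inclination error, using cos(i) = cos(lat) * sin(az)."""
        lat, az = np.radians([lat_deg, az_deg])
        s_lat, s_az = np.radians([sigma_lat_deg, sigma_az_deg])
        u = np.cos(lat) * np.sin(az)
        denom = np.sqrt(1.0 - u ** 2)                 # d(arccos u)/du = -1/sqrt(1 - u^2)
        di_daz = -np.cos(lat) * np.cos(az) / denom    # partial of inclination w.r.t. azimuth
        di_dlat = np.sin(lat) * np.sin(az) / denom    # partial of inclination w.r.t. latitude
        sigma_i = np.sqrt((di_daz * s_az) ** 2 + (di_dlat * s_lat) ** 2)
        return np.degrees(np.arccos(u)), np.degrees(sigma_i)

    # Illustrative numbers only: launch from 28.5 deg latitude at 45 deg azimuth
    inclination, sigma_inclination = inclination_error_stats(28.5, 45.0, 0.01, 0.05)
    ```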

  3. Quality of reporting statistics in two Indian pharmacology journals

    PubMed Central

    Jaykaran; Yadav, Preeti

    2011-01-01

    Objective: To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. Materials and Methods: All original articles published since 2002 were downloaded from the journals’ (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) websites. These articles were evaluated on the basis of the appropriateness of descriptive and inferential statistics. Descriptive statistics were evaluated on the basis of the reporting of the method of description and central tendencies. Inferential statistics were evaluated on the basis of whether the assumptions of the statistical methods were fulfilled and whether the statistical tests were appropriate. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Results: Inappropriate descriptive statistics were observed in 150 (78.1%, 95% CI 71.7–83.3%) articles. The most common reason for inappropriate descriptive statistics was the use of mean ± SEM in place of “mean (SD)” or “mean ± SD.” The most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of the assumptions of the statistical tests was mentioned in only two articles. An inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6–38.6%) articles. The most common reason for an inappropriate statistical test was the use of a two-group test for three or more groups. Conclusion: Articles published in the two Indian pharmacology journals are not devoid of statistical errors. PMID:21772766

  4. Descriptive and dynamic psychiatry: a perspective on DSM-III.

    PubMed

    Frances, A; Cooper, A M

    1981-09-01

    The APA Task Force on Nomenclature and Statistics attempted to make DSM-III a descriptive nosology that is atheoretical in regard to etiology. The authors believe that a sharp polarity between morphological classification and explanatory formulation is artificial and misleading, and they critically review DSM-III from a psychodynamic perspective. They compare and contrast the descriptive orientation in psychiatry with the psychodynamic orientation and conclude that the two approaches overlap, that they are complementary and necessary to each other, and that there is a descriptive data base underlying dynamic psychiatry which may be usefully included in future nomenclatures.

  5. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    DTIC Science & Technology

    2018-01-01

    ...collected data. These statistical techniques fall under the area of descriptive statistics, a methodology to condense data in quantitative form... (ARL-TR-8270, US Army Research Laboratory, January 2018)

  6. Descriptive data analysis.

    PubMed

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is the first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.

  7. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  8. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    A nonlinear optimization algorithm helps in finding the best-fit curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square (χ²) statistic. It utilizes a nonlinear optimization algorithm to calculate the best statistically weighted values of the parameters of the fitting function and the minimized χ². It provides the user with such statistical information as the goodness of fit and the estimated values of the parameters that produce the highest degree of correlation between the experimental data and the mathematical model. Written in FORTRAN 77.

  9. Descriptive statistics.

    PubMed

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
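
    For the measures of location and spread the chapter covers, here is a compact example (with invented data) of how the usual quantities are computed, along with a log transformation of the kind mentioned:

    ```python
    import numpy as np

    data = np.array([2.3, 3.1, 4.8, 5.0, 5.2, 6.7, 7.1, 9.4, 12.0])   # illustrative sample

    summary = {
        "n": int(data.size),
        "mean": data.mean(),                                   # location
        "median": float(np.median(data)),                      # robust location
        "sd": data.std(ddof=1),                                # spread
        "iqr": float(np.subtract(*np.percentile(data, [75, 25]))),
        "range": float(data.max() - data.min()),
    }

    # A log transformation is often used for right-skewed quantitative variables
    log_sd = np.log(data).std(ddof=1)
    print(summary, log_sd)
    ```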

  10. Driver/Vehicle Characteristics in Rear-End Precrash Scenarios Based on the General Estimates System (GES).

    DOT National Transportation Integrated Search

    1999-03-01

    This paper studies different driver and vehicle characteristics as they impact pre-crash scenarios of rear-end collisions. It gives a statistical description of the five most frequently occurring rear-end precrash scenarios based on vehicle and drive...

  11. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
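
    Two of the approaches the review describes, a range-based approximation for a missing SD and estimating a missing mean from the median and quartiles, reduce to simple formulas. The sketch below uses widely cited approximations (range/4 for the SD; the average of the quartiles and median for the mean); the exact formulas proposed in the included papers vary with sample size, so treat these as illustrative assumptions rather than the review's prescriptions.

    ```python
    def sd_from_range(minimum, maximum):
        """Practical approximation of a missing standard deviation from the range.
        The divisor 4 is a common rule of thumb; refinements depend on sample size."""
        return (maximum - minimum) / 4.0

    def mean_from_quartiles(q1, median, q3):
        """Estimate a missing mean from the lower quartile, median and upper quartile,
        the summary-statistic approach the review found best preserved precision."""
        return (q1 + median + q3) / 3.0

    # Illustrative trial reporting only the minimum, quartiles, median and maximum
    estimated_sd = sd_from_range(minimum=2.0, maximum=18.0)
    estimated_mean = mean_from_quartiles(q1=6.0, median=8.0, q3=11.0)
    ```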

  12. Visual Survey of Infantry Troops. Part 1. Visual Acuity, Refractive Status, Interpupillary Distance and Visual Skills

    DTIC Science & Technology

    1989-06-01

    ...letters on one line and several letters on the next line, there is no accurate way to credit these extra letters for statistical analysis. The decimal and... contains the descriptive statistics of the objective refractive error components of infantrymen. Figures 8-11 show the frequency distributions for sphere... equivalents. Nonspectacle wearers: Table 12 contains the descriptive statistics for non-spectacle wearers. Based on these refractive error data, about 30...

  13. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, it is attempted to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of some deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessively asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of the coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The conducted analysis makes it possible to draw a conclusion about the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
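
    The workflow the abstract describes, computing higher-order invariants to detect non-Gaussianity and then testing an empirical distribution against a candidate model with Kolmogorov's criterion, can be illustrated with standard tools. The sketch below uses synthetic stand-in data and a fitted normal as the candidate model purely for illustration; the paper's Poisson-based model is not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic stand-in for the delta-foF2 samples (not real ionospheric data)
    rng = np.random.default_rng(0)
    delta_fof2 = rng.normal(0.0, 1.0, 500) + rng.exponential(0.3, 500)

    # Invariants up to the fourth order flag departure from the Gaussian law
    skewness = stats.skew(delta_fof2)
    excess_kurtosis = stats.kurtosis(delta_fof2)

    # Kolmogorov-type comparison of the empirical distribution with a candidate model
    # (a fitted normal here; using fitted parameters makes the test only approximate)
    loc, scale = stats.norm.fit(delta_fof2)
    ks_statistic, p_value = stats.kstest(delta_fof2, "norm", args=(loc, scale))
    print(skewness, excess_kurtosis, ks_statistic, p_value)
    ```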

  14. Descriptive statistics: the specification of statistical measures and their presentation in tables and graphs. Part 7 of a series on evaluation of scientific publications.

    PubMed

    Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria

    2009-09-01

    Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.

  15. Computer-Based Feedback and Goal Intervention: Learning Effects

    ERIC Educational Resources Information Center

    Valdez, Alfred

    2012-01-01

    This study investigated how a goal intervention influences the learning effects gained from feedback when acquiring concepts and rules pertaining to the topic of descriptive statistics. Three feedback conditions; knowledge of correct response feedback (KCRF), principle-based feedback (PBF), and no-feedback (NF), were crossed with two goal…

  16. The Analysis of Organizational Diagnosis on Based Six Box Model in Universities

    ERIC Educational Resources Information Center

    Hamid, Rahimi; Siadat, Sayyed Ali; Reza, Hoveida; Arash, Shahin; Ali, Nasrabadi Hasan; Azizollah, Arbabisarjou

    2011-01-01

    Purpose: The analysis of organizational diagnosis based on the six-box model at universities. Research method: The research method was a descriptive survey. The statistical population consisted of 1,544 faculty members of universities, from which 218 persons were chosen as the sample through a stratified random sampling method. The research instrument was an organizational…

  17. Application of the Variety-Generator Approach to Searches of Personal Names in Bibliographic Data Bases - Part 1. Microstructure of Personal Authors' Names

    ERIC Educational Resources Information Center

    Fokker, Dirk W.; Lynch, Michael F.

    1974-01-01

    Variety-generator approach seeks to reflect the microstructure of data elements in their description for storage and search, and takes advantage of the consistency of statistical characteristics of data elements in homogeneous data bases. (Author)

  18. Complete integrability of information processing by biochemical reactions

    PubMed Central

    Agliari, Elena; Barra, Adriano; Dello Schiavo, Lorenzo; Moro, Antonio

    2016-01-01

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such framework far-reaching analogies are established among (anti-) cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling – based on spin systems – has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis–Menten, Hill, Adair) scenarios in the infinite-size approximation. However, the current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be re-formulated via a mechanical analogy – based on completely integrable hydrodynamic-type systems of PDEs – which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions. PMID:27812018

  19. Complete integrability of information processing by biochemical reactions

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Dello Schiavo, Lorenzo; Moro, Antonio

    2016-11-01

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such framework far-reaching analogies are established among (anti-) cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling - based on spin systems - has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis-Menten, Hill, Adair) scenarios in the infinite-size approximation. However, the current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be re-formulated via a mechanical analogy - based on completely integrable hydrodynamic-type systems of PDEs - which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions.

  20. Complete integrability of information processing by biochemical reactions.

    PubMed

    Agliari, Elena; Barra, Adriano; Dello Schiavo, Lorenzo; Moro, Antonio

    2016-11-04

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such framework far-reaching analogies are established among (anti-) cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling - based on spin systems - has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis-Menten, Hill, Adair) scenarios in the infinite-size approximation. However, the current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be re-formulated via a mechanical analogy - based on completely integrable hydrodynamic-type systems of PDEs - which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions.

  1. [Nursing care time in a teaching hospital].

    PubMed

    Rogenski, Karin Emília; Fugulin, Fernanda Maria Togeiro; Gaidzinski, Raquel Rapone; Rogenski, Noemi Marisa Brunet

    2011-03-01

    This is a quantitative, exploratory, descriptive study performed with the objective of identifying and analyzing the average time of nursing care delivered to patients in the Inpatient Units of the University Hospital at the University of São Paulo (UH-USP) from 2001 to 2005. The average nursing care time delivered to patients of the referred units was identified by applying a mathematical equation proposed in the literature, after surveying data from the Medical and Statistical Service and based on the monthly working shifts of the nursing professionals. Data analysis was performed using descriptive statistics. The average nursing care time observed in most units, despite some variations, remained stable during the analyzed period. Based on this observed stability, it is concluded that the nursing staff in the referred UH-USP units has been continuously evaluated with the purpose of maintaining the average time of assistance and, thus, the quality of the care being delivered.

  2. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements have a great importance in starting the use of drugs, for example rare events in personal experience that occasionally allow the barrier to drug use to be overcome. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step toward a realistic description of this phenomenon and can be easily generalized in various directions.
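
    As a toy illustration of the ingredients listed above (heterogeneous inclination, a social environment, and rare events that occasionally overcome the barrier to first use), here is a minimal agent-based sketch; every parameter and update rule is an invented placeholder, not the authors' model.

    ```python
    import random

    class Agent:
        def __init__(self, inclination):
            self.inclination = inclination   # heterogeneous intrinsic propensity in [0, 1]
            self.user = False

    def step(agents, peer_weight=0.3, rare_event_prob=0.005, scale=0.02):
        share_users = sum(a.user for a in agents) / len(agents)   # social environment
        for a in agents:
            if a.user:
                continue
            pressure = a.inclination + peer_weight * share_users
            # rare personal events can occasionally overcome the barrier to first use
            if random.random() < rare_event_prob or random.random() < scale * pressure:
                a.user = True

    random.seed(0)
    agents = [Agent(random.random()) for _ in range(1000)]
    for _ in range(200):
        step(agents)
    prevalence = sum(a.user for a in agents) / len(agents)
    ```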

  3. Experimental observations of Lagrangian sand grain kinematics under bedload transport: statistical description of the step and rest regimes

    NASA Astrophysics Data System (ADS)

    Guala, M.; Liu, M.

    2017-12-01

    The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, and so far elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, waiting time, and thus distinguish which quantities exhibit well converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.

  4. Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models

    PubMed Central

    Snijders, Tom A.B.; Steglich, Christian E.G.

    2014-01-01

    Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based models. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578

  5. Nondeterministic data base for computerized visual perception

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.

    1976-01-01

    A description is given of the knowledge representation data base in the perception subsystem of the Mars robot vehicle prototype. Two types of information are stored. The first is generic information that represents general rules that are conformed to by structures in the expected environments. The second kind of information is a specific description of a structure, i.e., the properties and relations of objects in the specific case being analyzed. The generic knowledge is represented so that it can be applied to extract and infer the description of specific structures. The generic model of the rules is substantially a Bayesian representation of the statistics of the environment, which means it is geared to representation of nondeterministic rules relating properties of, and relations between, objects. The description of a specific structure is also nondeterministic in the sense that all properties and relations may take a range of values with an associated probability distribution.

  6. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.
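
    Since the article's focus is the rationale for descriptive summaries plus linear and logistic regression, here is a compact, hedged example of both regressions on simulated data; the variable names, effect sizes, and the choice of the statsmodels package are illustrative assumptions, not drawn from the article.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 200
    gestational_age = rng.normal(30, 3, n)               # illustrative predictor
    X = sm.add_constant(gestational_age)

    # Linear regression for a continuous outcome (e.g., a measured dimension)
    head_circumference = 0.9 * gestational_age + rng.normal(0, 1.5, n)
    linear_fit = sm.OLS(head_circumference, X).fit()

    # Logistic regression for a binary outcome (e.g., anomaly detected yes/no)
    p = 1 / (1 + np.exp(-(0.3 * (gestational_age - 30))))
    anomaly = rng.binomial(1, p)
    logistic_fit = sm.Logit(anomaly, X).fit(disp=False)

    print(linear_fit.params, logistic_fit.params)
    ```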

  7. Comparative Research of Navy Voluntary Education at Operational Commands

    DTIC Science & Technology

    2017-03-01

    Keywords: return on investment, ROI, logistic regression, multivariate analysis, descriptive statistics, Markov, time-series, linear programming. Contents include descriptive statistics tables, privacy considerations, and a table of variables and descriptions (adapted from NETC, 2016).

  8. Analysis of Professional and Pre-Accession Characteristics and Junior Naval Officer Performance

    DTIC Science & Technology

    2018-03-01

    Contents include a literature review (Navy performance evaluation system, professional and pre-accession characteristics), a data description, summary statistics, and descriptive statistics.

  9. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  10. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  11. The Impact of Team-Based Learning on Nervous System Examination Knowledge of Nursing Students.

    PubMed

    Hemmati Maslakpak, Masomeh; Parizad, Naser; Zareie, Farzad

    2015-12-01

    Team-based learning is one of the active learning approaches in which independent learning is combined with small-group discussion in class. This study aimed to determine the impact of team-based learning on the nervous system examination knowledge of nursing students. This quasi-experimental study was conducted on 3rd-year nursing students, with the 5th-semester students as the intervention group and the 6th-semester students as the control group. The team-based learning method and the traditional lecture method were used to teach nervous system examination to the intervention and control groups, respectively. The data were collected with a 40-question test (multiple-choice, matching, gap-filling and descriptive questions) before and after the intervention in both groups. The Individual Readiness Assurance Test (RAT) and Group Readiness Assurance Test (GRAT) were used to collect data in the intervention group. The collected data were analyzed with SPSS ver. 13 using descriptive and inferential statistical tests. In the team-based learning group, the mean and standard deviation were 13.39 (4.52) before the intervention, increasing to 31.07 (3.20) after the intervention; this increase was statistically significant. There was also a statistically significant difference between the RAT and GRAT scores in the team-based learning group. The team-based learning approach resulted in much better improvement and stability of nervous system examination knowledge compared with the traditional lecture method; therefore, it could be used as an effective educational approach in nursing education.

  12. Descriptive Statistical Techniques for Librarians. 2nd Edition.

    ERIC Educational Resources Information Center

    Hafner, Arthur W.

    A thorough understanding of the uses and applications of statistical techniques is integral in gaining support for library funding or new initiatives. This resource is designed to help practitioners develop and manipulate descriptive statistical information in evaluating library services, tracking and controlling limited resources, and analyzing…

  13. Nurses' readiness for evidence-based practice at Finnish university hospitals: a national survey.

    PubMed

    Saunders, Hannele; Stevens, Kathleen R; Vehviläinen-Julkunen, Katri

    2016-08-01

    The aim of this study was to determine nurses' readiness for evidence-based practice at Finnish university hospitals. Although systematic implementation of evidence-based practice is essential to effectively improving patient outcomes and value of care, nurses do not consistently use evidence in practice. Uptake is hampered by lack of nurses' individual and organizational readiness for evidence-based practice. Although nurses' evidence-based practice competencies have been widely studied in countries leading the evidence-based practice movement, less is known about nurses' readiness for evidence-based practice in the non-English-speaking world. A cross-sectional descriptive survey design. The study was conducted in November-December 2014 in every university hospital in Finland with a convenience sample (n = 943) of practicing nurses. The electronic survey data were collected using the Stevens' Evidence-Based Practice Readiness Inventory, which was translated into Finnish according to standardized guidelines for translation of research instruments. The data were analysed using descriptive and inferential statistics. Nurses reported low to moderate levels of self-efficacy and low levels of evidence-based practice knowledge. A statistically significant, direct correlation was found between nurses' self-efficacy in employing evidence-based practice and their actual evidence-based practice knowledge level. Several statistically significant differences were found between nurses' socio-demographic variables and nurses' self-efficacy in employing evidence-based practice, and actual and perceived evidence-based practice knowledge. Finnish nurses at university hospitals are not ready for evidence-based practice. Although nurses are familiar with the concept of evidence-based practice, they lack the evidence-based practice knowledge and self-efficacy in employing evidence-based practice required for integrating best evidence into clinical care delivery. © 2016 John Wiley & Sons Ltd.

  14. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  15. Descriptive and inferential statistical methods used in burns research.

    PubMed

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), χ² test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals in the fields of biostatistics and epidemiology when using more advanced statistical techniques. Copyright 2009 Elsevier Ltd and ISBI. All rights reserved.

  16. Selection bias of Internet panel surveys: a comparison with a paper-based survey and national governmental statistics in Japan.

    PubMed

    Tsuboi, Satoshi; Yoshida, Honami; Ae, Ryusuke; Kojo, Takao; Nakamura, Yosikazu; Kitamura, Kunio

    2015-03-01

    To investigate the selection bias of an Internet panel survey organized by a commercial company. A descriptive study was conducted. The authors compared the characteristics of the Internet panel survey with a national paper-based survey and with national governmental statistics in Japan. The Internet panel survey included a higher proportion of women and of older participants, and more participants residing in large cities. Regardless of age and sex, the prevalence of highly educated people in the Internet panel survey was higher than in the paper-based survey and the national statistics. In men, the prevalence of heavy drinkers among the 30- to 49-year-old population and of habitual smokers among the 20- to 49-year-old population in the Internet panel survey was lower than in the national statistics. The estimated characteristics of commercial Internet panel surveys were quite different from the national statistical data. In a commercial Internet panel survey, selection bias should not be underestimated. © 2012 APJPH.

  17. Writing to Learn Statistics in an Advanced Placement Statistics Course

    ERIC Educational Resources Information Center

    Northrup, Christian Glenn

    2012-01-01

    This study investigated the use of writing in a statistics classroom to learn if writing provided a rich description of problem-solving processes of students as they solved problems. Through analysis of 329 written samples provided by students, it was determined that writing provided a rich description of problem-solving processes and enabled…

  18. The Greyhound Strike: Using a Labor Dispute to Teach Descriptive Statistics.

    ERIC Educational Resources Information Center

    Shatz, Mark A.

    1985-01-01

    A simulation exercise of a labor-management dispute is used to teach psychology students some of the basics of descriptive statistics. Using comparable data sets generated by the instructor, students work in small groups to develop a statistical presentation that supports their particular position in the dispute. (Author/RM)

  19. To Be Alone or in a Group: An Exploration into How the School-Based Experiences Differ for Black Male Teachers across One Urban School District

    ERIC Educational Resources Information Center

    Bristol, Travis J.

    2018-01-01

    One urban district administered the Black Male Teacher Environment Survey (BMTES) to each of its Black male teachers to measure their school-based experiences. This article highlights descriptive statistics from the 86 Black male teacher respondents. Findings suggest that participants' background characteristics and school-based experiences varied…

  20. Statistical Models of Fracture Relevant to Nuclear-Grade Graphite: Review and Recommendations

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bratton, Robert L.

    2011-01-01

    The nuclear-grade (low-impurity) graphite needed for the fuel element and moderator material for next-generation (Gen IV) reactors displays large scatter in strength and a nonlinear stress-strain response from damage accumulation. This response can be characterized as quasi-brittle. In this expanded review, relevant statistical failure models for various brittle and quasi-brittle material systems are discussed with regard to strength distribution, size effect, multiaxial strength, and damage accumulation. This includes descriptions of the Weibull, Batdorf, and Burchell models as well as models that describe the strength response of composite materials, which involves distributed damage. Results from lattice simulations are included for a physics-based description of material breakdown. Consideration is given to the predicted transition between brittle and quasi-brittle damage behavior versus the density of damage (level of disorder) within the material system. The literature indicates that weakest-link-based failure modeling approaches appear to be reasonably robust in that they can be applied to materials that display distributed damage, provided that the level of disorder in the material is not too large. The Weibull distribution is argued to be the most appropriate statistical distribution to model the stochastic-strength response of graphite.
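    The review argues for the Weibull distribution as the most appropriate strength model; in its standard two-parameter weakest-link form the failure probability at applied stress σ is P_f = 1 − exp[−(V/V₀)(σ/σ₀)^m]. A minimal sketch of that form follows; the modulus, characteristic strength, and volume ratio are illustrative assumptions, not values from the review.

    ```python
    # Minimal sketch: two-parameter Weibull strength model with weakest-link
    # volume scaling, as commonly used for brittle/quasi-brittle materials.
    # The modulus m and characteristic strength sigma_0 are illustrative
    # placeholders, not values from the review.
    import numpy as np

    def failure_probability(sigma, m=10.0, sigma_0=25.0, volume_ratio=1.0):
        """P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)^m) for applied stress sigma (MPa)."""
        return 1.0 - np.exp(-volume_ratio * (sigma / sigma_0) ** m)

    stresses = np.array([15.0, 20.0, 25.0, 30.0])
    print(failure_probability(stresses))                  # unit volume
    print(failure_probability(stresses, volume_ratio=8))  # larger specimen -> higher P_f
    ```

    The second call illustrates the size effect the review discusses: for the same stress, a larger stressed volume has a higher probability of containing a critical flaw.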

  1. Evaluating Abstract Art: Relation between Term Usage, Subjective Ratings, Image Properties and Personality Traits.

    PubMed

    Lyssenko, Nathalie; Redies, Christoph; Hayn-Leichsenring, Gregor U

    2016-01-01

    One of the major challenges in experimental aesthetics is the uncertainty of the terminology used in experiments. In this study, we recorded terms that are spontaneously used by participants to describe abstract artworks and studied their relation to the second-order statistical image properties of the same artworks (Experiment 1). We found that the usage frequency of some structure-describing terms correlates with statistical image properties, such as PHOG Self-Similarity, Anisotropy and Complexity. Additionally, emotion-associated terms correlate with measured color values. Next, based on the most frequently used terms, we created five different rating scales (Experiment 2) and obtained ratings of participants for the abstract paintings on these scales. We found significant correlations between descriptive score ratings (e.g., between structure and subjective complexity), between evaluative and descriptive score ratings (e.g., between preference and subjective complexity/interest) and between descriptive score ratings and statistical image properties (e.g., between interest and PHOG Self-Similarity, Complexity and Anisotropy). Additionally, we determined the participants' personality traits as described in the 'Big Five Inventory' (Goldberg, 1990; Rammstedt and John, 2005) and correlated them with the ratings and preferences of individual participants. Participants with higher scores for Neuroticism showed preferences for objectively more complex images, as well as a different notion of the term complex when compared with participants with lower scores for Neuroticism. In conclusion, this study demonstrates an association between objectively measured image properties and the subjective terms that participants use to describe or evaluate abstract artworks. Moreover, our results suggest that the description of abstract artworks, their evaluation and the preference of participants for their low-level statistical properties are linked to personality traits.

  2. Evaluating Abstract Art: Relation between Term Usage, Subjective Ratings, Image Properties and Personality Traits

    PubMed Central

    Lyssenko, Nathalie; Redies, Christoph; Hayn-Leichsenring, Gregor U.

    2016-01-01

    One of the major challenges in experimental aesthetics is the uncertainty of the terminology used in experiments. In this study, we recorded terms that are spontaneously used by participants to describe abstract artworks and studied their relation to the second-order statistical image properties of the same artworks (Experiment 1). We found that the usage frequency of some structure-describing terms correlates with statistical image properties, such as PHOG Self-Similarity, Anisotropy and Complexity. Additionally, emotion-associated terms correlate with measured color values. Next, based on the most frequently used terms, we created five different rating scales (Experiment 2) and obtained ratings of participants for the abstract paintings on these scales. We found significant correlations between descriptive score ratings (e.g., between structure and subjective complexity), between evaluative and descriptive score ratings (e.g., between preference and subjective complexity/interest) and between descriptive score ratings and statistical image properties (e.g., between interest and PHOG Self-Similarity, Complexity and Anisotropy). Additionally, we determined the participants’ personality traits as described in the ‘Big Five Inventory’ (Goldberg, 1990; Rammstedt and John, 2005) and correlated them with the ratings and preferences of individual participants. Participants with higher scores for Neuroticism showed preferences for objectively more complex images, as well as a different notion of the term complex when compared with participants with lower scores for Neuroticism. In conclusion, this study demonstrates an association between objectively measured image properties and the subjective terms that participants use to describe or evaluate abstract artworks. Moreover, our results suggest that the description of abstract artworks, their evaluation and the preference of participants for their low-level statistical properties are linked to personality traits. PMID:27445933

  3. Rear-End Crashes: Problem Size Assessment And Statistical Description

    DOT National Transportation Integrated Search

    1993-05-01

    KEYWORDS : RESEARCH AND DEVELOPMENT OR R&D, ADVANCED VEHICLE CONTROL & SAFETY SYSTEMS OR AVCSS, INTELLIGENT VEHICLE INITIATIVE OR IVI : THIS DOCUMENT PRESENTS PROBLEM SIZE ASSESSMENTS AND STATISTICAL CRASH DESCRIPTION FOR REAR-END CRASHES, INC...

  4. Statistics in the pharmacy literature.

    PubMed

    Lee, Charlene M; Soin, Herpreet K; Einarson, Thomas R

    2004-09-01

    Research in statistical methods is essential for maintenance of high quality of the published literature. To update previous reports of the types and frequencies of statistical terms and procedures in research studies of selected professional pharmacy journals. We obtained all research articles published in 2001 in 6 journals: American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, Canadian Journal of Hospital Pharmacy, Formulary, Hospital Pharmacy, and Journal of the American Pharmaceutical Association. Two independent reviewers identified and recorded descriptive and inferential statistical terms/procedures found in the methods, results, and discussion sections of each article. Results were determined by tallying the total number of times, as well as the percentage, that each statistical term or procedure appeared in the articles. One hundred forty-four articles were included. Ninety-eight percent employed descriptive statistics; of these, 28% used only descriptive statistics. The most common descriptive statistical terms were percentage (90%), mean (74%), standard deviation (58%), and range (46%). Sixty-nine percent of the articles used inferential statistics, the most frequent being χ² (33%), Student's t-test (26%), Pearson's correlation coefficient r (18%), ANOVA (14%), and logistic regression (11%). Statistical terms and procedures were found in nearly all of the research articles published in pharmacy journals. Thus, pharmacy education should aim to provide current and future pharmacists with an understanding of the common statistical terms and procedures identified to facilitate the appropriate appraisal and consequential utilization of the information available in research articles.

  5. Probabilistic Meteorological Characterization for Turbine Loads

    NASA Astrophysics Data System (ADS)

    Kelly, M.; Larsen, G.; Dimitrov, N. K.; Natarajan, A.

    2014-06-01

    Beyond the existing, limited IEC prescription to describe fatigue loads on wind turbines, we look towards probabilistic characterization of the loads via analogous characterization of the atmospheric flow, particularly for today's "taller" turbines with rotors well above the atmospheric surface layer. Based on both data from multiple sites as well as theoretical bases from boundary-layer meteorology and atmospheric turbulence, we offer probabilistic descriptions of shear and turbulence intensity, elucidating the connection of each to the other as well as to atmospheric stability and terrain. These are used as input to loads calculation, and with a statistical loads output description, they allow for improved design and loads calculations.

  6. A Case Study on Teaching the Topic "Experimental Unit" and How It Is Presented in Advanced Placement Statistics Textbooks

    ERIC Educational Resources Information Center

    Perrett, Jamis J.

    2012-01-01

    This article demonstrates how textbooks differ in their description of the term "experimental unit". Advanced Placement Statistics teachers and students are often limited in their statistical knowledge by the information presented in their classroom textbook. Definitions and descriptions differ among textbooks as well as among different…

  7. Accession Medical Standards Analysis and Research Activity (AMSARA) 2014, Annual Report, and four Supplemental Applicants and Accessions Tables for: Army, Air Force, Marine, and Navy

    DTIC Science & Technology

    2016-02-02

    Descriptive Statistics for Enlisted Service Applicants and Accessions … Summary Statistics for Applicants and Accessions for Enlisted Service … Applicants and … utilization among Soldiers screened using TAPAS. Section 2 of this report includes the descriptive statistics AMSARA compiles and publishes

  8. Statistical methods used in the public health literature and implications for training of public health professionals

    PubMed Central

    Hayat, Matthew J.; Powell, Amanda; Johnson, Tessa; Cadwell, Betsy L.

    2017-01-01

    Statistical literacy and knowledge is needed to read and understand the public health literature. The purpose of this study was to quantify basic and advanced statistical methods used in public health research. We randomly sampled 216 published articles from seven top tier general public health journals. Studies were reviewed by two readers and a standardized data collection form completed for each article. Data were analyzed with descriptive statistics and frequency distributions. Results were summarized for statistical methods used in the literature, including descriptive and inferential statistics, modeling, advanced statistical techniques, and statistical software used. Approximately 81.9% of articles reported an observational study design and 93.1% of articles were substantively focused. Descriptive statistics in table or graphical form were reported in more than 95% of the articles, and statistical inference reported in more than 76% of the studies reviewed. These results reveal the types of statistical methods currently used in the public health literature. Although this study did not obtain information on what should be taught, information on statistical methods being used is useful for curriculum development in graduate health sciences education, as well as making informed decisions about continuing education for public health professionals. PMID:28591190

  9. Statistical methods used in the public health literature and implications for training of public health professionals.

    PubMed

    Hayat, Matthew J; Powell, Amanda; Johnson, Tessa; Cadwell, Betsy L

    2017-01-01

    Statistical literacy and knowledge is needed to read and understand the public health literature. The purpose of this study was to quantify basic and advanced statistical methods used in public health research. We randomly sampled 216 published articles from seven top tier general public health journals. Studies were reviewed by two readers and a standardized data collection form completed for each article. Data were analyzed with descriptive statistics and frequency distributions. Results were summarized for statistical methods used in the literature, including descriptive and inferential statistics, modeling, advanced statistical techniques, and statistical software used. Approximately 81.9% of articles reported an observational study design and 93.1% of articles were substantively focused. Descriptive statistics in table or graphical form were reported in more than 95% of the articles, and statistical inference reported in more than 76% of the studies reviewed. These results reveal the types of statistical methods currently used in the public health literature. Although this study did not obtain information on what should be taught, information on statistical methods being used is useful for curriculum development in graduate health sciences education, as well as making informed decisions about continuing education for public health professionals.

  10. First- and fifth-year medical students' intention for emigration and practice abroad: a case study of Serbia.

    PubMed

    Santric-Milicevic, Milena M; Terzic-Supic, Zorica J; Matejic, Bojana R; Vasic, Vladimir; Ricketts, Thomas C

    2014-11-01

    Health worker migration is causing profound health, safety, social, economic and political challenges to countries without special policies for health professionals' mobility. This study describes the prevalence of migration intentions among medical undergraduates, identifies underlying factors related to migration intention and describes subsequent actions in Serbia. Data were captured by a survey of 938 medical students from Belgrade University (94% response rate), representing two thirds of the matching students in Serbia, who stated their intentions, reasons and obstacles regarding work abroad. Statistical analyses included descriptive statistics and a sequential multivariate logistic regression. Based on descriptive and inferential statistics we were able to predict the profile of first- and fifth-year medical students who intend or have plans to work abroad. This study contributes to our understanding of the causes and correlates of intent to migrate and could serve to raise awareness and point to valuable policy options to manage migration. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Logo image clustering based on advanced statistics

    NASA Astrophysics Data System (ADS)

    Wei, Yi; Kamel, Mohamed; He, Yiwei

    2007-11-01

    In recent years, there has been a growing interest in the research of image content description techniques. Among those, image clustering is one of the most frequently discussed topics. Similar to image recognition, image clustering is also a high-level representation technique. However it focuses on the coarse categorization rather than the accurate recognition. Based on wavelet transform (WT) and advanced statistics, the authors propose a novel approach that divides various shaped logo images into groups according to the external boundary of each logo image. Experimental results show that the presented method is accurate, fast and insensitive to defects.

  12. C-statistic fitting routines: User's manual and reference guide

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Farwana, Vida

    1991-01-01

    The computer program discussed can read several input files and provide a best-fit set of values for the functions provided by the user, using either the C-statistic or the χ² statistic method. The program consists of one main routine and several functions and subroutines. Detailed descriptions of each function and subroutine are presented. A brief description of the C-statistic and the reason for its application is also presented.
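    The manual does not reproduce the formula; the widely used form of the C-statistic (Cash 1979), suitable for Poisson-distributed counts, is C = 2 Σ_i [m_i − d_i + d_i ln(d_i/m_i)], with the d_i ln(d_i/m_i) term taken as zero where d_i = 0. A minimal generic sketch follows; it is an illustration of the statistic, not the NASA routines themselves.

    ```python
    # Hedged sketch: the Cash (1979) C-statistic for Poisson-distributed counts,
    # C = 2 * sum(m - d + d*ln(d/m)), with the d*ln(d/m) term taken as 0 where
    # d == 0. A generic illustration; the routines described above may use a
    # different but equivalent formulation.
    import numpy as np

    def cash_statistic(counts, model):
        counts = np.asarray(counts, dtype=float)
        model = np.asarray(model, dtype=float)
        with np.errstate(divide="ignore", invalid="ignore"):
            term = np.where(counts > 0, counts * np.log(counts / model), 0.0)
        return 2.0 * np.sum(model - counts + term)

    # Example: compare two candidate models against the same observed counts.
    observed = np.array([3, 0, 7, 2, 5])
    print(cash_statistic(observed, model=np.array([3.2, 0.5, 6.5, 2.1, 4.8])))
    print(cash_statistic(observed, model=np.array([1.0, 1.0, 1.0, 1.0, 1.0])))
    ```

    The smaller value for the first model reflects the better fit; minimizing C over model parameters plays the same role as minimizing χ² for Gaussian data.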

  13. Validating Future Force Performance Measures (Army Class): Concluding Analyses

    DTIC Science & Technology

    2016-06-01

    Table 3.10. Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores … Table 4.7. Descriptive Statistics for Analysis Criteria … Soldier attrition and performance: Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness

  14. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    PubMed

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

    To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, and to obtain data analysis results in real time. Descriptive statistics, time-series analysis, and multivariate regression analysis were implemented online on top of the database software using SQL and visualization tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each part of the data with interactive connections to the database; and generates export sheets that can be loaded directly into R, SAS, and SPSS. The information system for air pollution and health impact monitoring thus implements online statistical analysis and can provide real-time analysis results to its users.
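    A minimal sketch of the kind of online descriptive summary table and export sheet described above, using pandas; the table and column names (city, pm25, admissions) are hypothetical placeholders, not the system's actual schema.

    ```python
    # Minimal sketch: descriptive summary tables like those the system generates
    # online. The DataFrame and column names (city, pm25, admissions) are
    # hypothetical placeholders, not the monitoring system's actual schema.
    import pandas as pd

    data = pd.DataFrame({
        "city": ["A", "A", "B", "B", "B"],
        "pm25": [35.0, 48.2, 22.1, 30.5, 27.8],        # daily PM2.5 exposure
        "admissions": [120, 140, 80, 95, 88],          # daily hospital admissions
    })

    # Basic summary table per city (count, mean, std, min, quartiles, max).
    summary = data.groupby("city")[["pm25", "admissions"]].describe()
    print(summary)

    # Export sheet that R / SAS / SPSS can read directly.
    summary.to_csv("summary_by_city.csv")
    ```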

  15. Parental intention to support video game play by children with autism spectrum disorder: an application of the theory of planned behavior.

    PubMed

    Finke, Erinn H; Hickerson, Benjamin; McLaughlin, Eileen

    2015-04-01

    The purpose of this study was to determine parental attitudes regarding engagement with video games by their children with autism spectrum disorder (ASD) and whether attitudes vary based on ASD symptom severity. Online survey methodology was used to gather information from parents of children with ASD between the ages of 8 and 12 years. The finalized data set included 152 cases. Descriptive statistics and frequency analyses were used to examine participant demographics and video game play. Descriptive and inferential statistics were used to evaluate questions on the theory of planned behavior. Regression analyses determined the predictive ability of the theory of planned behavior constructs, and t tests provided additional descriptive information about between-group differences. Children with ASD play video games. There are no significant differences in the time, intensity, or types of games played based on severity of ASD symptoms (mild vs. moderate). Parents of children with ASD had positive attitudes about video game play. Parents of children with ASD appear to support video game play. On average, parents indicated video game play was positive for their children with ASD, particularly if they believed the games were having a positive impact on their child's development.

  16. Revisiting Personnel Utilization in Inclusion-Oriented Schools

    ERIC Educational Resources Information Center

    Giangreco, Michael F.; Suter, Jesse C.; Hurley, Sean M.

    2013-01-01

    Implementing research-based curricula and instruction in inclusion-oriented schools is helped or hindered by having coherent models of service delivery accounting for the full range of student diversity. The current investigation offers data from 174 participants in 32 schools, analyzed using descriptive statistics, correlation, and hierarchical…

  17. Chicago's Spanish-Speaking Population: Selected Statistics.

    ERIC Educational Resources Information Center

    Chicago Dept. of Development and Planning, IL.

    Based on selected data from the 1970 census, this report provides a general description of Chicago's Spanish-speaking population's: (1) general population characteristics; (2) age and family characteristics; (3) income; (4) labor force characteristics; (5) education; and (6) housing. Using the Census Bureau's definition of Spanish speaking (all…

  18. Predicting Subsequent Myopia in Initially Pilot-Qualified USAFA Cadets.

    DTIC Science & Technology

    1985-12-27

    Refraction Measurement … 4.0 RESULTS … 4.1 Descriptive Statistics … 4.2 Predictive Statistics … mentioned), and three were missing a status. The data of the subject who was commissionable were dropped from the statistical analyses. Of the 91 … relatively equal numbers of participants from all classes will become obvious within the results. 4.1 Descriptive Statistics: In the original plan

  19. Analysis of statistical misconception in terms of statistical reasoning

    NASA Astrophysics Data System (ADS)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because every person must be able to manage and use information that can now be obtained easily from all over the world. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done at various levels of education. However, the skill is often weak because many people, students included, assume that statistics is merely counting and applying formulas. Students also tend to have negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by examining the effect of students' misconceptions on statistical reasoning skill. The sample consisted of 32 mathematics education students who had taken a descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. Taking 65 as the minimum value for standard achievement of course competence, the students' mean values are below that standard. The misconception results indicate which subtopics should receive particular attention. Based on the assessment results, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  20. Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar

    NASA Astrophysics Data System (ADS)

    Lottman, Brian Todd

    1998-09-01

    This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid state coherent Doppler lidar on a fixed ground based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators (novel) is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer simulated data. Wind field statistics are produced for actual data for a cloud deck, and for multi- layer clouds. Unique results include detection of possible spectral signatures for rain, estimates for the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates for simple wind field statistics between cloud layers.
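    The mean-velocity estimate described above comes from the averaged Doppler spectrum; a standard approach is to accumulate periodograms over many pulses and convert the dominant Doppler frequency to a radial velocity via v = (λ/2)·f. A minimal sketch follows; the wavelength, sampling rate, and simulated return are illustrative assumptions, not parameters of the lidar described.

    ```python
    # Minimal sketch: estimating a radial velocity from the peak of an averaged
    # Doppler spectrum, v = (lambda / 2) * f_peak. The wavelength, sampling rate,
    # and simulated return are illustrative assumptions, not the parameters of
    # the lidar described in the abstract.
    import numpy as np

    wavelength = 2.0e-6        # m, typical 2-micron solid-state lidar (assumption)
    fs = 50.0e6                # Hz, complex sampling rate (assumption)
    n_samples, n_pulses = 256, 20
    true_velocity = 5.0        # m/s, simulated radial velocity
    f_doppler = 2.0 * true_velocity / wavelength
    t = np.arange(n_samples) / fs
    rng = np.random.default_rng(1)

    # Accumulate (average) periodograms over many pulses, then locate the peak.
    mean_spectrum = np.zeros(n_samples)
    for _ in range(n_pulses):
        noise = 0.5 * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))
        pulse = np.exp(2j * np.pi * f_doppler * t) + noise
        mean_spectrum += np.abs(np.fft.fft(pulse)) ** 2 / n_pulses

    freqs = np.fft.fftfreq(n_samples, d=1.0 / fs)
    f_hat = freqs[np.argmax(mean_spectrum)]
    print(f"estimated radial velocity: {wavelength * f_hat / 2.0:.2f} m/s")
    ```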

  1. Job Satisfaction DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

    focuses more specifically on satisfaction with the job. Included is a review of the 4.0 description and items, followed by the proposed modifications to...the factor. The DEOCS 4.0 description provided for job satisfaction is “the perception of personal fulfillment in a specific vocation, and sense of...piloting items on the DEOCS; (4) examining the descriptive statistics, exploratory factor analysis results, and aggregation statistics; and (5

  2. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type 1 censoring. The software was verified by reproducing results published by others.
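    A minimal sketch of the likelihood-based, two-parameter Weibull fit with type I (right) censoring that the manual describes in general terms; this is a generic scipy implementation, not the NASA software itself, and the cycle counts are made-up example data.

    ```python
    # Minimal sketch: maximum-likelihood fit of a two-parameter Weibull
    # distribution with type I (right) censoring. A generic illustration using
    # scipy, not the NASA software itself; the cycle counts are made up.
    import numpy as np
    from scipy.optimize import minimize

    cycles = np.array([2.1e6, 3.4e6, 4.0e6, 5.2e6, 6.8e6, 1.0e7, 1.0e7])  # test lives
    failed = np.array([1, 1, 1, 1, 1, 0, 0], dtype=bool)  # 0 = suspended (censored)

    def neg_log_likelihood(params):
        log_shape, log_scale = params                 # log-parameterised for positivity
        k, lam = np.exp(log_shape), np.exp(log_scale)
        z = (cycles / lam) ** k
        log_pdf = np.log(k) - np.log(lam) + (k - 1) * np.log(cycles / lam) - z
        log_surv = -z                                 # log survival for censored units
        return -(np.sum(log_pdf[failed]) + np.sum(log_surv[~failed]))

    result = minimize(neg_log_likelihood, x0=[0.0, np.log(cycles.mean())], method="Nelder-Mead")
    shape, scale = np.exp(result.x)
    print(f"Weibull shape = {shape:.2f}, characteristic life = {scale:.3g} cycles")
    ```

    Treating the suspended tests through the survival term rather than discarding them is what distinguishes the censored likelihood from a naive fit to failures only.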

  3. Statistical Method Based on Confidence and Prediction Regions for Analysis of Volatile Organic Compounds in Human Breath Gas

    NASA Astrophysics Data System (ADS)

    Wimmer, G.

    2008-01-01

    In this paper we introduce two confidence and two prediction regions for statistical characterization of concentration measurements of product ions in order to discriminate various groups of persons for prospective better detection of primary lung cancer. Two MATLAB algorithms have been created for more adequate description of concentration measurements of volatile organic compounds in human breath gas for potential detection of primary lung cancer and for evaluation of the appropriate confidence and prediction regions.

  4. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Kranz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  5. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This was development research aimed at producing a Statistics textbook model supported with information and communication technology (ICT) and Portfolio-Based Assessment. The book was designed for college mathematics students to improve their ability in mathematical connection and communication. The research had three stages: define, design, and develop. The textbook consists of 10 chapters, each containing an introduction and core material with examples and exercises. The development phase began with an initial design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which underwent a limited readability test. Revision of draft 2 then produced draft 3, which was tried out on a small sample to produce a valid model textbook. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported with ICT and Portfolio-Based Assessment is valid and meets the criteria of practicality.

  6. Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

    NASA Technical Reports Server (NTRS)

    Davis, B. J.; Feiveson, A. H.

    1975-01-01

    Results are presented of CITARS data processing in raw form. Tables of descriptive statistics are given along with descriptions and results of inferential analyses. The inferential results are organized by questions which CITARS was designed to answer.

  7. 26 CFR 6a.103A-2 - Qualified mortgage bond.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... CFR 570.452, by the Secretary of Housing and Urban Development. (E) Statistical and descriptive.... Settlement costs include titling and transfer costs, title insurance, survey fees, or other similar costs... actually $43,000. Such determination is based on a comprehensive survey of residential housing sales in the...

  8. 26 CFR 6a.103A-2 - Qualified mortgage bond.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... CFR 570.452, by the Secretary of Housing and Urban Development. (E) Statistical and descriptive.... Settlement costs include titling and transfer costs, title insurance, survey fees, or other similar costs... actually $43,000. Such determination is based on a comprehensive survey of residential housing sales in the...

  9. 26 CFR 6a.103A-2 - Qualified mortgage bond.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... CFR 570.452, by the Secretary of Housing and Urban Development. (E) Statistical and descriptive.... Settlement costs include titling and transfer costs, title insurance, survey fees, or other similar costs... actually $43,000. Such determination is based on a comprehensive survey of residential housing sales in the...

  10. 26 CFR 6a.103A-2 - Qualified mortgage bond.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... CFR 570.452, by the Secretary of Housing and Urban Development. (E) Statistical and descriptive.... Settlement costs include titling and transfer costs, title insurance, survey fees, or other similar costs... actually $43,000. Such determination is based on a comprehensive survey of residential housing sales in the...

  11. 26 CFR 6a.103A-2 - Qualified mortgage bond.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... CFR 570.452, by the Secretary of Housing and Urban Development. (E) Statistical and descriptive.... Settlement costs include titling and transfer costs, title insurance, survey fees, or other similar costs... actually $43,000. Such determination is based on a comprehensive survey of residential housing sales in the...

  12. Faculty Perceptions of Transition Personnel Preparation in Saudi Arabia

    ERIC Educational Resources Information Center

    Alhossan, Bandar A.; Trainor, Audrey A.

    2017-01-01

    This study investigated to what extent faculty members include and value transition curricula in special education preparation programs in Saudi Arabia. A web-based survey was conducted and sent to special education professors across 20 universities. Descriptive statistics and a t-test analysis generated three main findings: (a) Institutions…

  13. The best motivator priorities parents choose via analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Farah, R. N.; Latha, P.

    2015-05-01

    Motivation is probably the most important factor that educators can target in order to improve learning. Numerous cross-disciplinary theories have been postulated to explain motivation. While each of these theories has some truth, no single theory seems to adequately explain all human motivation. Human beings in general, and pupils in particular, are complex creatures with complex needs and desires. In this paper, the Analytic Hierarchy Process (AHP) is proposed as a way to handle large, dynamic and complex real-world multi-criteria decision-making problems, here applied to selecting the most suitable motivator when parents choose a school for their children. Data were analyzed using SPSS 17.0 ("Statistical Package for the Social Sciences") software, with both descriptive and inferential statistical testing. Descriptive statistics were used to profile the demographic factors of the pupil and parent respondents, and inferential statistics were used to determine the pupils' and parents' highest motivator priorities; AHP was used to rank the criteria considered by parents, namely school principals, teachers, pupils and parents. The moderating factor was the set of schools in Ampang selected based on the "Standard Kualiti Pendidikan Malaysia" (SKPM). Inferential statistics such as one-way ANOVA were used to test significance, and the data were used to calculate the AHP weightings. School principals were found to be the best motivator for parents in choosing a school for their children, followed by teachers, parents and pupils.
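    AHP derives priority weights from a pairwise comparison matrix, typically via its principal eigenvector, together with a consistency check. A minimal sketch follows; the comparison values are illustrative, not the study's survey data, and only the four criteria named in the abstract are carried over.

    ```python
    # Minimal sketch: deriving AHP priority weights from a pairwise comparison
    # matrix via the principal eigenvector, with the standard consistency ratio.
    # The comparison values are illustrative, not the study's survey data; the
    # four criteria follow the abstract (principals, teachers, pupils, parents).
    import numpy as np

    criteria = ["school principals", "teachers", "pupils", "parents"]
    # A[i, j] = how much more important criterion i is than criterion j (Saaty 1-9 scale).
    A = np.array([
        [1.0, 3.0, 5.0, 4.0],
        [1/3, 1.0, 3.0, 2.0],
        [1/5, 1/3, 1.0, 1/2],
        [1/4, 1/2, 2.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    # Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.90 for n = 4).
    lambda_max = eigvals[k].real
    ci = (lambda_max - len(A)) / (len(A) - 1)
    print(dict(zip(criteria, np.round(weights, 3))), "CR =", round(ci / 0.90, 3))
    ```

    A consistency ratio below about 0.1 is the usual threshold for accepting the judgments; larger values suggest the pairwise comparisons should be revisited.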

  14. Prison Radicalization: The New Extremist Training Grounds?

    DTIC Science & Technology

    2007-09-01

    distributing and collecting survey data, and the data analysis. The analytical methodology includes descriptive and inferential statistical methods, in... statistical analysis of the responses to identify significant correlations and relationships. B. SURVEY DATA COLLECTION To effectively access a...Q18, Q19, Q20, and Q21. Due to the exploratory nature of this small survey, data analyses were confined mostly to descriptive statistics and

  15. Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective

    USGS Publications Warehouse

    Barker, Richard J.; Link, William A.

    2015-01-01

    Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
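    The AIC-based model weights discussed above are commonly computed as w_i = exp(−Δ_i/2) / Σ_j exp(−Δ_j/2), where Δ_i is each model's AIC minus the smallest AIC in the set. A minimal sketch follows; the AIC values are made up for illustration.

    ```python
    # Minimal sketch: Akaike weights, w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
    # with delta_i = AIC_i - min(AIC). The AIC values below are made up.
    import numpy as np

    aic = np.array([1012.4, 1014.1, 1019.8])        # hypothetical AICs for 3 candidate models
    delta = aic - aic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()
    print(np.round(weights, 3))                     # often read as model probabilities
    ```

    The paper's point is that interpreting these weights as coherent model probabilities rests on a Bayesian argument and on treating the candidate models as exact descriptions of the data-generating mechanism.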

  16. Impacts of Maximizing Tendencies on Experience-Based Decisions.

    PubMed

    Rim, Hye Bin

    2017-06-01

    Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire the information from personally repeated experiences or from statistical summary descriptions. This phenomenon, known as the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, the desire to maximize decision outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information searching would lead them to make experience-based decisions even when payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels on both the accuracy of knowledge gained from experiences and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers, as expected. The implications of the findings and unanswered questions for future research were discussed.

  17. Probabilistic models for reactive behaviour in heterogeneous condensed phase media

    NASA Astrophysics Data System (ADS)

    Baer, M. R.; Gartling, D. K.; DesJardin, P. E.

    2012-02-01

    This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.

  18. A method to reconstruct long precipitation series using systematic descriptive observations in weather diaries: the example of the precipitation series for Bern, Switzerland (1760-2003)

    NASA Astrophysics Data System (ADS)

    Gimmi, U.; Luterbacher, J.; Pfister, C.; Wanner, H.

    2007-01-01

    In contrast to barometric and thermometric records, early instrumental precipitation series are quite rare. Based on systematic descriptive daily records, a quantitative monthly precipitation series for Bern (Switzerland) was reconstructed back to the year 1760 (reconstruction based on documentary evidence). Since every observer had his own personal style to fill out his diary, the main focus was to avoid observer-specific bias in the reconstruction. An independent statistical monthly precipitation reconstruction was performed using instrumental data from European sites. Over most periods the reconstruction based on documentary evidence lies inside the 2 standard errors of the statistical estimates. The comparison between these two approaches enables an independent verification and a reliable error estimate. The analysis points to below normal rainfall totals in all seasons during the late 18th century and in the 1820s and 1830s. Increased precipitation occurred in the early 1850s and the late 1870s, particularly from spring to autumn. The annual precipitation totals generally tend to be higher in the 20th century than in the late 18th and 19th century. Precipitation changes are discussed in the context of socioeconomic impacts and Alpine glacier dynamics. The conceptual design of the reconstruction procedure is aimed at application for similar descriptive precipitation series, which are known to be abundant from the mid-18th century in Europe and the U.S.

  19. Anderson Localization in Quark-Gluon Plasma

    NASA Astrophysics Data System (ADS)

    Kovács, Tamás G.; Pittler, Ferenc

    2010-11-01

    At low temperature the low end of the QCD Dirac spectrum is well described by chiral random matrix theory. In contrast, at high temperature there is no similar statistical description of the spectrum. We show that at high temperature the lowest part of the spectrum consists of a band of statistically uncorrelated eigenvalues obeying essentially Poisson statistics and the corresponding eigenvectors are extremely localized. Going up in the spectrum the spectral density rapidly increases and the eigenvectors become more and more delocalized. At the same time the spectral statistics gradually crosses over to the bulk statistics expected from the corresponding random matrix ensemble. This phenomenon is reminiscent of Anderson localization in disordered conductors. Our findings are based on staggered Dirac spectra in quenched lattice simulations with the SU(2) gauge group.

  20. Zubarev's Nonequilibrium Statistical Operator Method in the Generalized Statistics of Multiparticle Systems

    NASA Astrophysics Data System (ADS)

    Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.

    2018-01-01

    We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics in Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.

  1. Rebuilding Government Legitimacy in Post-conflict Societies: Case Studies of Nepal and Afghanistan

    DTIC Science & Technology

    2015-09-09

    administered via the verbal scales due to reduced time spent explaining the visual show cards. Statistical results corresponded with observations from...a three-step strategy for dealing with item non-response. First, basic descriptive statistics are calculated to determine the extent of item...descriptive statistics for all items in the survey), however this section of the report highlights just some of the findings. Thus, the results

  2. Applying Descriptive Statistics to Teaching the Regional Classification of Climate.

    ERIC Educational Resources Information Center

    Lindquist, Peter S.; Hammel, Daniel J.

    1998-01-01

    Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…

  3. Capturing rogue waves by multi-point statistics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.

    2016-01-01

    As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which for the first time allows the grasping of extreme rogue wave events in a highly satisfactory statistical manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. With this stochastic description surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
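    The core of the approach above is multi-point statistics of height increments across nested time scales, with conditional probabilities estimated directly from the data. A minimal sketch of that estimation step follows; the synthetic elevation record, sampling rate, and lag choices are stand-ins, not the authors' data or parameters.

    ```python
    # Minimal sketch of the increment-statistics idea described above: compute
    # surface-elevation increments at two nested time scales and estimate the
    # conditional distribution p(d_small | d_large) from data. The synthetic
    # record below is a stand-in for real sea-surface measurements.
    import numpy as np

    rng = np.random.default_rng(7)
    fs = 2.0                                    # Hz, sampling rate (assumption)
    t = np.arange(200_000) / fs
    eta = (1.2 * np.sin(2 * np.pi * 0.08 * t + rng.uniform(0, 2 * np.pi))
           + 0.6 * np.sin(2 * np.pi * 0.15 * t + rng.uniform(0, 2 * np.pi))
           + 0.3 * rng.standard_normal(t.size))   # crude synthetic elevation record

    def increments(x, lag):
        return x[lag:] - x[:-lag]

    tau_large, tau_small = 64, 8                # two nested time scales (in samples)
    d_large = increments(eta, tau_large)
    d_small = increments(eta, tau_small)[: d_large.size]

    # joint histogram -> conditional PDF p(d_small | d_large), estimated from data
    joint, xedges, yedges = np.histogram2d(d_large, d_small, bins=40, density=True)
    dy = yedges[1] - yedges[0]
    p_large = joint.sum(axis=1, keepdims=True) * dy
    conditional = np.divide(joint, p_large, out=np.zeros_like(joint), where=p_large > 0)
    print(conditional.shape)                    # 40 x 40 table of p(d_small | d_large)
    ```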

  4. College Choice in America.

    ERIC Educational Resources Information Center

    Manski, Charles F.; And Others

    The processes of choosing a college and being accepted by a college are analyzed, based on data on nearly 23,000 seniors from more than 1,300 high schools from the National Longitudinal Study of the Class of 1972. Econometric modeling and descriptive statistics are provided on: student behavior in selecting a college, choosing school/nonschool…

  5. Professional Development, Promotion, and Pay Differences Between Women and Men in Educational Psychology.

    ERIC Educational Resources Information Center

    Ekstrom, Ruth B.

    A questionnaire about graduate school and professional experiences was completed by 235 white females, 10 minority females, 198 white males, and 12 minority males who hold the doctoral degree and are members of the American Psychological Association, Division of Educational Psychology. Tentative findings, based on simple descriptive statistics,…

  6. Influence of Motivation Theory and Supplemental Workshops on First-Time Passing Rates of HBCU Teacher Candidates

    ERIC Educational Resources Information Center

    Moffett, Noran L.; Frizzell, Melanie M.; Brownlee-Williams, Yolanda; Thompson, Jill M.

    2014-01-01

    The action research methodology for this study reports descriptive statistical findings from the performance of 19 Early Childhood Education African American teacher candidates matriculating through a state-approved program at an HBCU. Researcher-moderators provided a treatment plan of focused summer workshops, conceptualized based upon the…

  7. Preparing Occupational Therapy Students to Address Mental Health Promotion, Prevention, and Intervention in School-Based Practice

    ERIC Educational Resources Information Center

    Blackwell, Cindy DeRuiter; Bilics, Andrea

    2018-01-01

    Directors of entry-level occupational therapy (OT) programs were surveyed regarding how their programs prepare students to become mental health practitioners in schools. Analysis of quantitative data included descriptive statistics to examine participants' ratings of their program's ability to prepare students for mental health practice. We found…

  8. Technology Integration in K-12 Science Classrooms: An Analysis of Barriers and Implications

    ERIC Educational Resources Information Center

    Hechter, Richard P.; Vermette, Laurie Anne

    2013-01-01

    This paper examines the barriers to technology integration for Manitoban K-12 inservice science educators (n = 430) based on a 10-item online survey; results are analyzed according to teaching stream using the Technology, Pedagogy, and Content Knowledge (TPACK) framework. Quantitative descriptive statistics indicated that the leading barriers…

  9. Blastopathies and microcephaly in a Chornobyl impacted region of Ukraine

    PubMed Central

    Wertelecki, Wladimir; Yevtushok, Lyubov; Zymak-Zakutnia, Natalia; Wang, Bin; Sosyniuk, Zoriana; Lapchenko, Serhiy; Hobart, Holly H

    2014-01-01

    This population-based descriptive epidemiology study demonstrates that rates of conjoined twins, teratomas, neural tube defects, microcephaly, and microphthalmia in the Rivne province of Ukraine are among the highest in Europe. The province is 200 km distant from the Chornobyl site and its northern half, a region known as Polissia, is significantly polluted by ionizing radiation. The rates of neural tube defects, microcephaly and microphthalmia in Polissia are statistically significantly higher than in the rest of the province. A survey of at-birth head size showed that values were statistically smaller in males and females born in one Polissia county than among neonates born in the capital city. These observations provide clues for confirmatory and cause-effect prospective investigations. The strength of this study stems from a reliance on international standards prevalent in Europe and a decade-long population-based surveillance of congenital malformations in two distinct large populations. The limitations of this study, as those of other descriptive epidemiology investigations, is that identified cause-effect associations require further assessment by specific prospective investigations designed to address specific teratogenic factors. PMID:24666273

  10. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. Stochastic version of the extended Kirman's agent based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent based model providing matching macroscopic description serves as a microscopic reasoning of the earlier proposed stochastic model exhibiting power law statistics.
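    Kirman's model has agents in one of two states, switching either spontaneously or by recruitment from the other group. A minimal event-driven sketch of one common formulation follows; the parameter values are illustrative, not those of the paper, and the fixed per-event step stands in for the variable event time scale the authors introduce.

    ```python
    # Minimal sketch of a Kirman-type herding model: N agents in two states,
    # switching spontaneously (rate epsilon) or by recruitment proportional to
    # the other group's size. Parameters are illustrative, not the paper's, and
    # the fixed event step stands in for the authors' variable event time scale.
    import numpy as np

    def simulate_kirman(n_agents=100, epsilon=0.01, h=0.3, n_events=50_000, seed=3):
        rng = np.random.default_rng(seed)
        k = n_agents // 2                      # agents currently in state "1"
        path = np.empty(n_events)
        for i in range(n_events):
            p_up = (n_agents - k) * (epsilon + h * k / n_agents)
            p_down = k * (epsilon + h * (n_agents - k) / n_agents)
            if rng.random() < p_up / (p_up + p_down):
                k += 1
            else:
                k -= 1
            path[i] = k / n_agents             # fraction of agents in state "1"
        return path

    x = simulate_kirman()
    print(f"mean fraction = {x.mean():.2f}, std = {x.std():.2f}")
    ```

    Depending on the ratio of spontaneous switching to recruitment, the occupation fraction either fluctuates around one half or spends long spells near the extremes, which is the herding behaviour the stochastic description aims to reproduce.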

  11. A Description of the Building Materials Data Base for Portland, Maine.

    DTIC Science & Technology

    1986-06-01

    Keywords: acid precipitation, data bases, damage assessment, environmental protection, damage from acid deposition (Portland, Maine), damage to buildings, statistical analysis. Abstract (fragment): ...types and amounts of building surface materials exposed to acid deposition. The stratified, systematic, unaligned random sampling approach was used...

  12. Technology-based counseling in the management of weight and lifestyles of obese or overweight children and adolescents: A descriptive systematic literature review.

    PubMed

    Kaakinen, Pirjo; Kyngäs, Helvi; Kääriäinen, Maria

    2018-03-01

    The number of overweight and obese children and adolescents has increased worldwide. Obese children and adolescents need counseling interventions, including technology-based methods, to help them manage their weight by changing their lifestyles. To describe technology-based counseling interventions in supporting obese or overweight children and adolescents to change their weight/lifestyle. Descriptive systematic literature review. A literature search was conducted using the Cinahl, Medline, PsycINFO, and Medic databases in September 2010 and updated in January 2015. Predefined inclusion criteria were used for the search. After a quality assessment, 28 studies were included in the data extraction. No statistically significant difference in BMI was detected between the intervention and control groups. However, some studies found that BMI decreased, and statistically significant differences in fruit and vegetable consumption were reported. In two studies, differences in physical activity were detected between the intervention and control groups, but in eight studies the difference was not significant. Goal setting and feedback on progress support physical activity and changes in diet. This study identifies available technology interventions for obese or overweight children and adolescents. It seems that using technology-based counseling interventions may encourage obese and overweight children and adolescents to pursue a healthier lifestyle.

  13. Statistics in three biomedical journals.

    PubMed

    Pilcík, T

    2003-01-01

    In this paper we analyze the use of statistics, and associated problems, in three Czech biological journals in the year 2000. We investigated 23 articles in Folia Biologica, 60 articles in Folia Microbiologica, and 88 articles in Physiological Research. Among publications with statistical content, the most frequently used methods were descriptive statistics and the t-test. The most common mistakes were the absence of a reference to the statistical software used and an insufficient description of the data. We have compared our results with the results of similar studies in some other medical journals. The use of important statistical methods is comparable with that in most medical journals, and the proportion of articles in which the applied method is described insufficiently is moderately low.

  14. Unlawful Discrimination DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

    Included is a review of the 4.0 description and items, followed by the proposed modifications to the factor. The current DEOCS (4.0) contains multiple... Respondent pay grades shown include ...Officer (E7-E9): 586 (10.8%); Junior Officer (O1-O3): 474 (9%); and Senior Officer (O4 and above): 391 (6.1%). Descriptive Statistics and Reliability: this section... displays descriptive statistics for the items on the Unlawful Discrimination scale. All items had a range from 1 to 7 (strongly disagree to strongly agree).

  15. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine if the covariance among the observed variables could yield a descriptive equation-based model, or better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with the resulting models are presented.

  16. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and times scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.
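
    As a worked illustration (an assumed generic form, not necessarily the authors' exact equation), a fractional-order Langevin equation for the point rain rate fluctuation driven by space-time white noise f,

    \[ \left(\partial_t^{\alpha} + c\,(-\nabla^{2})^{\beta} + \lambda\right)\xi(\mathbf{r},t) = f(\mathbf{r},t), \]

    has the second-moment spectrum

    \[ S(\mathbf{k},\omega) \;\propto\; \frac{1}{\bigl|(i\omega)^{\alpha} + c\,|\mathbf{k}|^{2\beta} + \lambda\bigr|^{2}}, \]

    so the fractional exponents \(\alpha,\beta\) and the parameters \(c,\lambda\) control how the variance of rain averaged over a box of side L and duration T decays with L and T; the statistics of the averaged field follow by multiplying S by the squared Fourier transform of the averaging window.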

  17. Using Microsoft Excel[R] to Calculate Descriptive Statistics and Create Graphs

    ERIC Educational Resources Information Center

    Carr, Nathan T.

    2008-01-01

    Descriptive statistics and appropriate visual representations of scores are important for all test developers, whether they are experienced testers working on large-scale projects, or novices working on small-scale local tests. Many teachers put in charge of testing projects do not know "why" they are important, however, and are utterly convinced…

  18. Self-Esteem and Academic Achievement of High School Students

    ERIC Educational Resources Information Center

    Moradi Sheykhjan, Tohid; Jabari, Kamran; Rajeswari, K.

    2014-01-01

    The primary purpose of this study was to determine the influence of self-esteem on academic achievement among high school students in Miandoab City of Iran. The methodology of the research is descriptive and correlational; descriptive and inferential statistics were used to analyze the data. The statistical population includes male and female high…

  19. Role strain among male RNs in the critical care setting: Perceptions of an unfriendly workplace.

    PubMed

    Carte, Nicholas S; Williams, Collette

    2017-12-01

    Traditionally, nursing has been a female-dominated profession. Men employed as registered nurses have been in the minority, and little is known about the experiences of this demographic. The purpose of this descriptive, quantitative study was to understand the relationship between demographic variables and causes of role strain among male nurses in critical care settings. The Sherrod Role Strain Scale assesses role strain within the context of role conflict, role overload, role ambiguity and role incongruity. Data analysis included descriptive and inferential statistics. Inferential statistics involved repeated-measures ANOVA testing for significant differences in the causes of role strain among male nurses employed in critical care settings, and a post hoc comparison of specific demographic data using multivariate analyses of variance (MANOVAs). Data from 37 male nurses in critical care settings in the northeast of the United States were used to calculate descriptive statistics (mean and standard deviation) and the results of the repeated-measures ANOVA and the post hoc secondary MANOVA analysis. The descriptive data showed that all participants worked full-time. There was a nearly even split between participants who worked day shift (46%) and night shift (43%), and most participants indicated they had 15 years or more of experience as a registered nurse (54%). Significant findings of this study include two causes of role strain in male nurses employed in critical care settings: role ambiguity, and role overload based on ethnicity. Consistent with previous research findings, the results of this study suggest that male registered nurses employed in critical care settings do experience role strain. The two main causes of role strain in male nurses are role ambiguity and role overload. Copyright © 2017. Published by Elsevier Ltd.

  20. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
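
    As an illustration of the basic observed statistic, the sketch below computes the probability of identification from replicate binary results and attaches a Wilson score confidence interval. This is a generic proportion calculation under stated assumptions, not the validation protocol described in the record; the function name is hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    def poi_with_wilson_ci(identified, n_replicates, conf=0.95):
        """Sketch: probability of identification (POI) for a binary-result method.

        POI is the proportion of replicates returning 'Identified' at a given
        test condition; the Wilson score interval is one common way to attach a
        confidence interval to such a proportion.
        """
        p_hat = identified / n_replicates
        z = norm.ppf(0.5 + conf / 2.0)
        denom = 1.0 + z**2 / n_replicates
        centre = (p_hat + z**2 / (2 * n_replicates)) / denom
        half = (z / denom) * np.sqrt(p_hat * (1 - p_hat) / n_replicates
                                     + z**2 / (4 * n_replicates**2))
        return p_hat, (centre - half, centre + half)

    # Example: 11 of 12 replicates identified at one test concentration.
    print(poi_with_wilson_ci(11, 12))
    ```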

  1. Optimal combining of ground-based sensors for the purpose of validating satellite-based rainfall estimates

    NASA Technical Reports Server (NTRS)

    Krajewski, Witold F.; Rexroth, David T.; Kiriaki, Kiriakie

    1991-01-01

    Two problems related to radar rainfall estimation are described. The first part is a description of a preliminary data analysis for the purpose of statistical estimation of rainfall from multiple (radar and raingage) sensors. Raingage, radar, and joint radar-raingage estimation is described, and some results are given. Statistical parameters of rainfall spatial dependence are calculated and discussed in the context of optimal estimation. Quality control of radar data is also described. The second part describes radar scattering by ellipsoidal raindrops. An analytical solution is derived for the Rayleigh scattering regime. Single and volume scattering are presented. Comparison calculations with the known results for spheres and oblate spheroids are shown.

  2. Description of 'REQUEST-KYUSHYU' for KYUKEICHO regional data base

    NASA Astrophysics Data System (ADS)

    Takimoto, Shin'ichi

    The Kyushu Economic Research Association (an incorporated foundation) recently launched the regional database service 'REQUEST-Kyushu'. It is a full-scale database compiled from the information and know-how the Association has accumulated over forty years. It comprises a regional information database of journal and newspaper articles and a statistical information database of economic statistics. The former is searched on a personal computer, and the search result (original text) is then sent by facsimile. The latter is also searched on a personal computer, where the data can be processed, edited, or downloaded. This paper describes the characteristics, content, and system outline of 'REQUEST-Kyushu'.

  3. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are presented using several medical examples. We present two ordinal-variable clustering examples, as a more challenging variable type in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, in addition to the modeling approach, must be selected based on the scale of the variables. PMID:24672565
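
    The evaluation logic described can be illustrated with a short scikit-learn sketch: cluster the ordinal-coded features into two groups and compare the clusters with the malignant/benign gold standard. Plain k-means is used here only as a stand-in for the ordinal-scale clustering methods of the paper, and the function name and sensitivity/specificity bookkeeping are generic assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_and_compare(X_ordinal, y_true, seed=0):
        """Sketch: cluster ordinal-coded features, then compare with gold-standard labels.

        X_ordinal: (n_samples, n_features) integer codes, e.g. nine 10-level
        variables per patient; y_true: 0 = benign, 1 = malignant (NumPy arrays).
        """
        labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X_ordinal)
        # Cluster labels are arbitrary, so align them with the gold standard.
        if np.mean(labels == y_true) < 0.5:
            labels = 1 - labels
        tp = np.sum((labels == 1) & (y_true == 1))
        tn = np.sum((labels == 0) & (y_true == 0))
        fp = np.sum((labels == 1) & (y_true == 0))
        fn = np.sum((labels == 0) & (y_true == 1))
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / len(y_true)
        return sensitivity, specificity, accuracy
    ```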

  4. Beliefs and implementation of evidence-based practice among community health nurses: A cross-sectional descriptive study.

    PubMed

    Pereira, Filipa; Pellaux, Victoria; Verloo, Henk

    2018-03-08

    To describe beliefs about evidence-based practice and record levels of implementation among community health nurses working independently and in community healthcare centres in the canton of Valais, Switzerland. In many settings, evidence-based practice is considered a key means of delivering better and secure health care. However, there is a paucity of published studies on the implementation of evidence-based practice in community health care. Cross-sectional descriptive study (n = 100). Beliefs about evidence-based practice and levels of implementation were measured using validated scales developed by Melnyk et al. (Worldviews on Evidence-Based Nursing, 5, 2008, 208). Information on respondents' sociodemographic and professional characteristics was collected. Data were analysed using descriptive and inferential statistics. The final response rate was 32.3% (n = 100). More than half of respondents had previously heard about evidence-based practice; most believed in the value of using evidence to guide their practice and were prepared to improve their skills to be able to do so. However, the rate of implementation of evidence-based practice in daily practice in the 8 weeks before the survey was poor. Statistically significant positive associations were found between beliefs about evidence-based practice and how respondents had heard about it and between implementation rates and whether they had heard about evidence-based practice and how they had done so. Evidence-based practices requiring scientific knowledge and skills were implemented less frequently. Greater professional community healthcare experience and management roles did not increase implementation of evidence-based practice. The systematic implementation of evidence-based practice by community health nurses working independently and in healthcare centres in Valais was rare, despite their positive beliefs about it. These results revealed the level of implementation of evidence-based practice by nurses in community healthcare settings in Valais. Further research is required to better understand their needs and expectations and to develop suitable strategies that will allow the integration of evidence-based practice into nurses' daily practice. © 2018 The Authors Journal of Clinical Nursing Published by John Wiley & Sons Ltd.

  5. General Aviation Activity and Avionics Survey. 1978

    DTIC Science & Technology

    1980-03-01

    This report presents the results and a description of the 1978... the United States registered general aviation aircraft fleet, the dominant component of civil aviation in the U.S. The survey was based on a statistically selected sample of about 13.3 percent of the general aviation fleet and obtained a response rate of 74 percent. Survey results are based upon...

  6. Federal Policies and Higher Education in the United States.

    ERIC Educational Resources Information Center

    Prisco, Anne; Hurley, Alicia D.; Carton, Thomas C; Richardson, Richard C., Jr.

    The purpose of this report is to describe U.S. federal policies that have helped to shape the context within which state systems of higher education operated during the past decade. It also presents descriptive statistics about the higher education enterprise in the United States, including available performance data. The report is based on the…

  7. Building a Performance-Based Assessment System To Diagnose Strengths and Weaknesses in Reading Achievement.

    ERIC Educational Resources Information Center

    Hennings, Sara S.; Hughes, Kay E.

    This paper provides a brief description of the development of the Diagnostic Assessments of Reading with Trial Teaching Strategies (DARTTS) program by F. G. Roswell and J. S. Chall. It also describes the editorial and statistical procedures that were used to validate the program for determining students' strengths and weaknesses in important areas…

  8. National 4-H Common Measures: Initial Evaluation from California 4-H

    ERIC Educational Resources Information Center

    Lewis, Kendra M.; Horrillo, Shannon J.; Widaman, Keith; Worker, Steven M.; Trzesniewski, Kali

    2015-01-01

    Evaluation is a key component to learning about the effectiveness of a program. This article provides descriptive statistics of the newly developed National 4-H Common Measures (science, healthy living, citizenship, and youth development) based on data from 721 California 4-H youth. The measures were evaluated for their reliability and validity of…

  9. The Influence of Mathematics Professional Development, School-Level, and Teacher-Level Variables on Primary Students' Mathematics Achievement

    ERIC Educational Resources Information Center

    Polly, Drew; Wang, Chuang; Martin, Christie; Lambert, Richard; Pugalee, David; Middleton, Catherina

    2018-01-01

    This study examined the influence of a professional development project about an internet-based mathematics formative assessment tool and related pedagogies on primary teachers' instruction and student achievement. Teachers participated in 72 h of professional development during the year. Descriptive statistics and multivariate analyses of…

  10. Rasch Based Analysis of Oral Proficiency Test Data.

    ERIC Educational Resources Information Center

    Nakamura, Yuji

    2001-01-01

    This paper examines the rating scale data of oral proficiency tests analyzed by a Rasch Analysis focusing on an item map and factor analysis. In discussing the item map, the difficulty order of six items and students' answering patterns are analyzed using descriptive statistics and measures of central tendency of test scores. The data ranks the…

  11. Fourier Descriptor Analysis and Unification of Voice Range Profile Contours: Method and Applications

    ERIC Educational Resources Information Center

    Pabon, Peter; Ternstrom, Sten; Lamarche, Anick

    2011-01-01

    Purpose: To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. Method: A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the…

  12. Women's Liberation Scale (WLS): A Measure of Attitudes Toward Positions Advocated by Women's Groups.

    ERIC Educational Resources Information Center

    Goldberg, Carlos

    The Women's Liberation Scale (WLS) is a 14-item, Likert-type scale designed to measure attitudes toward positions advocated by women's groups. The WLS and its four-alternative response schema is presented, along with descriptive statistics of scores based on male and female college samples. Reliability and validity measures are reported, and the…

  13. Training Needs Assessment of Technical Skills in Managers of Tehran Electricity Distribution Company

    ERIC Educational Resources Information Center

    Koohi, Amir Hasan; Ghandali, Fatemeh; Dehghan, Hasan; Ghandali, Najme

    2016-01-01

    The current dissertation was conducted to investigate and identify the training needs of managers (top and middle) in the Tehran Electricity Distribution Company. In terms of its purpose, the research method is applied; in terms of the data collection method, the study is of the descriptive-survey type. The statistical population in this study is all of the managers in…

  14. Statistical description and transport in stochastic magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanden Eijnden, E.; Balescu, R.

    1996-03-01

    The statistical description of particle motion in a stochastic magnetic field is presented. Starting from the stochastic Liouville equation (or hybrid kinetic equation) associated with the equations of motion of a test particle, the probability distribution function of the system is obtained for various magnetic fields and collisional processes. The influence of these two ingredients on the statistics of the particle dynamics is stressed. In all cases, transport properties of the system are discussed. © 1996 American Institute of Physics.

  15. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and times scales. The main restriction is the assumption that the statistics of the precipitation field is spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior that involves occasional anomalously correlated rain events, present a challenge for the model.

  16. Stress among Academic Staff and Students' Satisfaction of Their Performances in Payame Noor University of Miandoab

    ERIC Educational Resources Information Center

    Jabari, Kamran; Moradi Sheykhjan, Tohid

    2015-01-01

    The present study examined the relationship between stress among academic staff and students' satisfaction with their performance at Payame Noor University (PNU) of Miandoab City, Iran, in 2014. The methodology of the research is descriptive and correlational; descriptive and inferential statistics were used to analyze the data. The statistical population…

  17. Quantitative Methods in Library and Information Science Literature: Descriptive vs. Inferential Statistics.

    ERIC Educational Resources Information Center

    Brattin, Barbara C.

    Content analysis was performed on the top six core journals for 1990 in library and information science to determine the extent of research in the field. Articles (n=186) were examined for descriptive or inferential statistics and separately for the presence of mathematical models. Results show a marked (14%) increase in research for 1990,…

  18. What's in a Name? The Incorrect Use of Case Series as a Study Design Label in Studies Involving Dogs and Cats.

    PubMed

    Sargeant, J M; O'Connor, A M; Cullen, J N; Makielski, K M; Jones-Bitton, A

    2017-07-01

    Study design labels are used to identify relevant literature to address specific clinical and research questions and to aid in evaluating the evidentiary value of research. Evidence from the human healthcare literature indicates that the label "case series" may be used inconsistently and inappropriately. Our primary objective was to determine the proportion of studies in the canine and feline veterinary literature labeled as case series that actually corresponded to descriptive cohort studies, population-based cohort studies, or other study designs. Our secondary objective was to identify the proportion of case series in which potentially inappropriate inferential statements were made. Descriptive evaluation of published literature. One-hundred published studies (from 19 journals) labeled as case series. Studies were identified by a structured literature search, with random selection of 100 studies from the relevant citations. Two reviewers independently characterized each study, with disagreements resolved by consensus. Of the 100 studies, 16 were case series. The remaining studies were descriptive cohort studies (35), population-based cohort studies (36), or other observational or experimental study designs (13). Almost half (48.8%) of the case series or descriptive cohort studies, with no control group and no formal statistical analysis, included inferential statements about the efficacy of treatment or statistical significance of potential risk factors. Authors, peer-reviewers, and editors should carefully consider the design elements of a study to accurately identify and label the study design. Doing so will facilitate an understanding of the evidentiary value of the results. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  19. Integrating the ACR Appropriateness Criteria Into the Radiology Clerkship: Comparison of Didactic Format and Group-Based Learning.

    PubMed

    Stein, Marjorie W; Frank, Susan J; Roberts, Jeffrey H; Finkelstein, Malka; Heo, Moonseong

    2016-05-01

    The aim of this study was to determine whether group-based or didactic teaching is more effective to teach ACR Appropriateness Criteria to medical students. An identical pretest, posttest, and delayed multiple-choice test was used to evaluate the efficacy of the two teaching methods. Descriptive statistics comparing test scores were obtained. On the posttest, the didactic group gained 12.5 points (P < .0001), and the group-based learning students gained 16.3 points (P < .0001). On the delayed test, the didactic group gained 14.4 points (P < .0001), and the group-based learning students gained 11.8 points (P < .001). The gains in scores on both tests were statistically significant for both groups. However, the differences in scores were not statistically significant comparing the two educational methods. Compared with didactic lectures, group-based learning is more enjoyable, time efficient, and equally efficacious. The choice of educational method can be individualized for each institution on the basis of group size, time constraints, and faculty availability. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  20. The Lake Tahoe Basin Land Use Simulation Model

    USGS Publications Warehouse

    Forney, William M.; Oldham, I. Benson

    2011-01-01

    This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.

  1. Findings From a Nursing Care Audit Based on the Nursing Process: A Descriptive Study.

    PubMed

    Poortaghi, Sarieh; Salsali, Mahvash; Ebadi, Abbas; Rahnavard, Zahra; Maleki, Farzaneh

    2015-09-01

    Although using the nursing process improves nursing care quality, few studies have evaluated nursing performance in accordance with nursing process steps either nationally or internationally. This study aimed to audit nursing care based on a nursing process model. This was a cross-sectional descriptive study in which a nursing audit checklist was designed and validated for assessing nurses' compliance with nursing process. A total of 300 nurses from various clinical settings of Tehran university of medical sciences were selected. Data were analyzed using descriptive and inferential statistics, including frequencies, Pearson correlation coefficient and independent samples t-tests. The compliance rate of nursing process indicators was 79.71 ± 0.87. Mean compliance scores did not significantly differ by education level and gender. However, overall compliance scores were correlated with nurses' age (r = 0.26, P = 0.001) and work experience (r = 0.273, P = 0.001). Nursing process indicators can be used to audit nursing care. Such audits can be used as quality assurance tools.
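
    A minimal sketch of the kind of analysis reported (descriptive statistics, Pearson correlations with age and work experience, and a t-test between groups) is given below, assuming simple NumPy arrays with one value per audited nurse. The function name, variable names, and the 0/1 gender coding are illustrative assumptions, not taken from the study.

    ```python
    import numpy as np
    from scipy import stats

    def audit_summary(compliance, age, experience, gender):
        """Sketch: descriptive and inferential statistics for an audit data set.

        compliance, age, experience: float arrays; gender: 0/1 integer array
        (coding assumed for illustration).
        """
        return {
            "mean_compliance": np.mean(compliance),
            "sd_compliance": np.std(compliance, ddof=1),
            "r_age": stats.pearsonr(compliance, age),            # (r, p-value)
            "r_experience": stats.pearsonr(compliance, experience),
            # Compare compliance scores between the two gender groups.
            "t_gender": stats.ttest_ind(compliance[gender == 0],
                                        compliance[gender == 1]),
        }
    ```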

  2. Condylar positional changes in rapid maxillary expansion assessed with cone-beam computer tomography.

    PubMed

    McLeod, Lauren; Hernández, Ivonne A; Heo, Giseon; Lagravère, Manuel O

    2016-09-01

    The aim of this study was to determine the presence of condylar spatial changes in patients having rapid maxillary expansion treatment compared to a control group. Thirty-seven patients with maxillary transverse deficiency (11-17 years old) were randomly allocated into two groups (one treatment group with a tooth-borne expander [hyrax] and one control group). Cone-beam computed tomography (CBCT) scans were obtained from each patient at two time points (initial, T1, and at removal of the appliance at 6 months, T2). CBCTs were analyzed using AVIZO software, and landmarks were placed on the upper first molars and premolars, cranial base, condyles and glenoid fossa. Descriptive statistics, intraclass correlation coefficients and one-way ANOVA were used to determine whether there was a change in condyle position with respect to the glenoid fossa and cranial base and whether there was a statistically significant difference between groups. Descriptive statistics show that changes in the condyle position with respect to the glenoid fossa were minor in both groups (<1.9mm average for both groups). The largest difference in both groups was found when measuring the distance between the left and right condyle heads. When comparing changes between the groups, no statistically significant difference was found between changes in the condyles (P<0.05). Rapid maxillary expansion treatments have mild effects on the condylar position. Nevertheless, these changes do not present a significant difference from controls, thus not constituting a limitation for applying this treatment. Copyright © 2016 CEO. Published by Elsevier Masson SAS. All rights reserved.

  3. Theoretical approaches to the steady-state statistical physics of interacting dissipative units

    NASA Astrophysics Data System (ADS)

    Bertin, Eric

    2017-02-01

    The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker-Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nunes, Rafael C.; Abreu, Everton M.C.; Neto, Jorge Ananias

    Based on the relationship between thermodynamics and gravity we propose, with the aid of Verlinde's formalism, an alternative interpretation of the dynamical evolution of the Friedmann-Robertson-Walker Universe. This description takes into account the entropy and temperature intrinsic to the horizon of the universe, due to the information holographically stored there, through the non-gaussian statistical theories proposed by Tsallis and Kaniadakis. The effect of these non-gaussian statistics in the cosmological context is to change the strength of the gravitational constant. In this paper, we consider the wCDM model modified by the non-gaussian statistics and investigate the compatibility of these non-gaussian modifications with the cosmological observations. In order to analyze to what extent the cosmological data constrain these non-extensive statistics, we will use type Ia supernovae, baryon acoustic oscillations, the Hubble expansion rate function and the linear growth of matter density perturbations data. We show that Tsallis' statistics is favored at the 1σ confidence level.
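
    For reference, the standard forms of the two generalized entropies invoked here are (with \(k_B\) the Boltzmann constant and \(p_i\) the microstate probabilities)

    \[ S_q = k_B\,\frac{1-\sum_i p_i^{\,q}}{q-1} \quad\text{(Tsallis)}, \qquad S_\kappa = -k_B \sum_i \frac{p_i^{\,1+\kappa} - p_i^{\,1-\kappa}}{2\kappa} \quad\text{(Kaniadakis)}, \]

    both of which reduce to the Boltzmann-Gibbs entropy \(S=-k_B\sum_i p_i\ln p_i\) in the limits \(q\to 1\) and \(\kappa\to 0\). How these entropies map onto an effective gravitational coupling in Verlinde's formalism is specific to the paper and is not reproduced here.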

  5. Urban pavement surface temperature. Comparison of numerical and statistical approach

    NASA Astrophysics Data System (ADS)

    Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia

    2015-04-01

    The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, to manage snow plowing and salting of roads. Such forecasts mainly rely on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement, with a canyon configuration. Nevertheless, there is a specific need for the physical description and the numerical implementation of traffic in the energy flux balance. This traffic was originally considered as a constant. Many changes were made to a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Some experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, the corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement for the forecasts from the numerical model based on this energy balance approach. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, with data from thermal mapping using infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained in the specific case of the urban configuration, with traffic incorporated into the measurements used for the statistical analysis. A comparison between the results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
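
    The statistical branch of the comparison can be sketched with scikit-learn: PCA to inspect the dominant modes of the predictors, followed by PLS regression of pavement surface temperature on those predictors. The feature layout, split, and function name are assumptions for illustration, not the authors' exact setup.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    def fit_surface_temperature_model(X, y, n_pls_components=3, seed=0):
        """Sketch: PCA inspection plus PLS regression of surface temperature.

        X: (n_obs, n_features) predictors (e.g. air temperature, thermal-mapping
        measurements, traffic descriptors); y: pavement surface temperature.
        """
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=seed)

        # PCA shows how much variance each principal component of X explains.
        pca = PCA().fit(X_train)
        print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

        # PLS regression of y on X with a small number of latent components.
        pls = PLSRegression(n_components=n_pls_components).fit(X_train, y_train)
        print("R^2 on held-out data:", pls.score(X_test, y_test))
        return pls
    ```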

  6. 75 FR 4323 - Additional Quantitative Fit-testing Protocols for the Respiratory Protection Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-27

    ... respirators (500 and 1000 for protocols 1 and 2, respectively). However, OSHA could not evaluate the results... the values of these descriptive statistics for revised PortaCount® QNFT protocols 1 (at RFFs of 100 and 500) and 2 (at RFFs of 200 and 1000). Table 2--Descriptive Statistics for RFFs of 100 and 200...

  7. On-Call Communication in Orthopaedic Trauma: "A Picture Is Worth a Thousand Words"--A Survey of OTA Members.

    PubMed

    Molina, Cesar S; Callan, Alexandra K; Burgos, Eduardo J; Mir, Hassan R

    2015-05-01

    To quantify the effects of varying clinical communication styles (verbal and pictorial) on the ability of orthopaedic trauma surgeons to understand an injury and formulate an initial management plan. A Research Electronic Data Capture survey was e-mailed to all OTA members. Respondents quantified (5-point Likert scale) how confident they felt understanding an injury and establishing an initial management plan based on the information provided for 5 common orthopaedic trauma scenarios. Three verbal descriptions were created for each scenario and categorized as limited, moderate, or detailed. The questions were repeated with the addition of a radiographic image and then repeated a third time including a clinical photograph. Statistical evaluation consisted of descriptive statistics and Kruskal-Wallis analyses using STATA (version 12.0). Of the 221 respondents, there were a total of 95 who completed the entire survey. Nearly all were currently taking call (92/95 = 96.8%) and the majority were fellowship trained (79/95 = 83.2%). Most practice at a level I trauma center (58/95 = 61.1%) and work with orthopaedic residents (62/95 = 65.3%). There was a significant increase in confidence scores between a limited, moderate, and detailed description in all clinical scenarios for understanding the injury and establishing an initial management plan (P < 0.05). There was a significant difference in confidence scores between all 3 types of evidence presented (verbal, verbal + x-ray, verbal + x-ray + photograph) in both understanding and managing the injury for limited and moderate descriptions (P < 0.001). No differences were seen when adding pictorial information to the detailed verbal description. When comparing confidence scores between a detailed description without images and a limited description that includes radiographs and a photograph, no difference in confidence levels was seen in 7 of the 10 scenarios (P > 0.05). The addition of images in the form of radiographs and/or clinical photographs greatly improves the confidence of orthopaedic trauma surgeons in understanding injuries and establishing initial management plans with limited verbal information (P < 0.001). The inclusion of x-rays and photographs raises the confidence for understanding and management with limited verbal information to the level of a detailed verbal description in most scenarios. Mobile technology allows for easy secure transfer of images that can make up for the lack of available information from limited verbal descriptions because of the knowledge base of communicating providers.

  8. Exploring Marine Corps Officer Quality: An Analysis of Promotion to Lieutenant Colonel

    DTIC Science & Technology

    2017-03-01

    [Table of contents excerpt: G. Descriptive Statistics; 1. Dependent Variable Summary Statistics; 2. Performance...; 4. Further Research; Appendix A. Summary Statistics of FITREP and...]

  9. The Relationship between Anxiety and Coping Strategies in Family Caregivers of Patients with Trauma.

    PubMed

    Rahnama, Mozhgan; Shahdadi, Hosien; Bagheri, Somyeh; Moghadam, Mahdieh Poodineh; Absalan, Ahmad

    2017-04-01

    Traumatic events are of high incidence and affect not only the patient but also their family members, causing psychological problems such as stress and anxiety for caregivers of these patients. Therefore, the application of appropriate coping strategies by them seems necessary in order to promote mental health. To study the relationship of anxiety with coping strategies in family caregivers of trauma patients. The present research was a descriptive-correlational study which was carried out on 127 family caregivers of patients with trauma in the intensive care unit, surgery ward and emergency unit of Amir al-Mu'minin Hospital of Zabol, Sistan and Baluchestan Province. The respondents were selected based on the convenience sampling method. A demographics questionnaire, the DASS-21, and a Coping Strategies questionnaire were used for data collection. The obtained data were statistically analysed using descriptive statistics, Analysis of Variance (ANOVA), t-tests, and the Pearson correlation coefficient in the Statistical Package for the Social Sciences (SPSS) version 21.0. Based on the results, 89.9% of family caregivers suffer from mild to severe anxiety. The most common type of coping strategy used by the respondents was emotion-focused. The results showed no relationship between anxiety and emotion-centrism, but an inverse relationship was found between problem-centrism and anxiety. The majority of family caregivers had anxiety. Given the inverse relationship between the level of anxiety and the use of problem-based coping strategies, in addition to identifying and reducing the causes of anxiety in caregivers, it is recommended that appropriate coping strategies be taught to them.

  10. The Relationship between Anxiety and Coping Strategies in Family Caregivers of Patients with Trauma

    PubMed Central

    Rahnama, Mozhgan; Bagheri, Somyeh; Moghadam, Mahdieh Poodineh; Absalan, Ahmad

    2017-01-01

    Introduction Traumatic events are of high incidence and affect not only the patient but also their family members, causing psychological problems such as stress and anxiety for caregivers of these patients. Therefore, the application of appropriate coping strategies by them seems necessary in order to promote mental health. Aim To study the relationship of anxiety with coping strategies in family caregivers of trauma patients. Materials and Methods The present research was a descriptive-correlational study which was carried out on 127 family caregivers of patients with trauma in the intensive care unit, surgery ward and emergency unit of Amir al-Mu’minin Hospital of Zabol, Sistan and Baluchestan Province. The respondents were selected based on the convenience sampling method. A demographics questionnaire, the DASS-21, and a Coping Strategies questionnaire were used for data collection. The obtained data were statistically analysed using descriptive statistics, Analysis of Variance (ANOVA), t-tests, and the Pearson correlation coefficient in the Statistical Package for the Social Sciences (SPSS) version 21.0. Results Based on the results, 89.9% of family caregivers suffer from mild to severe anxiety. The most common type of coping strategy used by the respondents was emotion-focused. The results showed no relationship between anxiety and emotion-centrism, but an inverse relationship was found between problem-centrism and anxiety. Conclusion The majority of family caregivers had anxiety. Given the inverse relationship between the level of anxiety and the use of problem-based coping strategies, in addition to identifying and reducing the causes of anxiety in caregivers, it is recommended that appropriate coping strategies be taught to them. PMID:28571166

  11. Statistics of the geomagnetic secular variation for the past 5Ma

    NASA Technical Reports Server (NTRS)

    Constable, C. G.; Parker, R. L.

    1986-01-01

    A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.

  12. Statistics of the geomagnetic secular variation for the past 5 m.y

    NASA Technical Reports Server (NTRS)

    Constable, C. G.; Parker, R. L.

    1988-01-01

    A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.

  13. A stochastic fractional dynamics model of space-time variability of rain

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  14. Exploring the Structure of Library and Information Science Web Space Based on Multivariate Analysis of Social Tags

    ERIC Educational Resources Information Center

    Joo, Soohyung; Kipp, Margaret E. I.

    2015-01-01

    Introduction: This study examines the structure of Web space in the field of library and information science using multivariate analysis of social tags from the Website, Delicious.com. A few studies have examined mathematical modelling of tags, mainly examining tagging in terms of tripartite graphs, pattern tracing and descriptive statistics. This…

  15. A Statistically Based Training Diagnostic Tool for Marine Aviation

    DTIC Science & Technology

    2014-06-01

    [Acronym list fragment] ...mission essential task list; MDG, maneuver description guide; MOS, military occupational specialty; MSHARP, Marine Sierra Hotel Aviation Reporting Program. [Abstract fragment] ...include the Defense Readiness Reporting System (DRRS) Marine Corps, the Current Readiness Program (CRP), and the Marine Sierra Hotel Aviation... (Beuschel, 2008). Many of these systems focus on business decisions regarding how companies can increase their bottom line by appealing to customers more...

  16. CIOs' Transformational Leadership Behaviors in Community Colleges: A Comparison-Based Approach to Improving Job Satisfaction of Information Technology Workers

    ERIC Educational Resources Information Center

    Abouelenein, Mahmoud S.

    2012-01-01

    The purpose of this quantitative, descriptive research study was to determine, through statistical analysis, any correlation between the perceived transformational leadership traits of CIOs at two-year community colleges in Kansas and measures of the job satisfaction among IT workers at those community colleges. The objectives of this research…

  17. Pipelines or Pipe Dreams? PhD Production and Other Matters in a South African Dental Research Institute 1954-2006

    ERIC Educational Resources Information Center

    Grossman, Elly S.; Cleaton-Jones, Peter E.

    2011-01-01

    This retrospective study documents the Masters and PhD training of 131 Dental Research Institute (DRI) postgraduates (1954-2006) to establish demographics, throughput and research outcomes for future PhD pipeline strategies using the DRI database. Descriptive statistics show four degree-based groups of postgraduates: 18 PhDs; 55 MScs; 42 MDents…

  18. The Importance of Psychological Needs for the Post Traumatic Stress Disorder (PTSD) and Displaced Children in Schools

    ERIC Educational Resources Information Center

    Uguak, Uget Apayo

    2011-01-01

    The study targets children in especially difficult circumstances, aged 8-14 years, and explores the importance of psychosocial needs for PTSD and displaced children in schools. Out of 235 participants, descriptive statistics indicated that 63 children were traumatized. Based on ANOVA findings, the results revealed that there is a significant effect…

  19. Priming English Past Tense Verbs: Rules or Statistics?

    ERIC Educational Resources Information Center

    Kielar, A.; Joanisse, Marc F.; Hare, M. L.

    2008-01-01

    A key question in language processing concerns the rule-like nature of many aspects of grammar. Much research on this topic has focused on English past tense morphology, which comprises a regular, rule-like pattern (e.g., bake-baked) and a set of irregular forms that defy a rule-based description (e.g., take-took). Previous studies have used past…

  20. Spatiotemporal chaos of self-replicating spots in reaction-diffusion systems.

    PubMed

    Wang, Hongli; Ouyang, Qi

    2007-11-23

    The statistical properties of self-replicating spots in the reaction-diffusion Gray-Scott model are analyzed. In the chaotic regime of the system, the spots that dominate the spatiotemporal chaos grow and divide in two or decay into the background randomly and continuously. The rates at which the spots are created and decay are observed to be linearly dependent on the number of spots in the system. We derive a probabilistic description of the spot dynamics based on the statistical independence of spots and thus propose a characterization of the spatiotemporal chaos dominated by replicating spots.
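
    A minimal NumPy sketch of the Gray-Scott model in a spot-forming parameter regime is given below; the parameter values are typical textbook choices and the time stepping is a plain explicit Euler scheme, so it illustrates the system being analyzed rather than reproducing the paper's statistical analysis of spot creation and decay rates. The function name and initial condition are assumptions for illustration.

    ```python
    import numpy as np

    def gray_scott(n=128, Du=0.16, Dv=0.08, F=0.0367, k=0.0649, steps=5000, seed=0):
        """Sketch of the Gray-Scott reaction-diffusion model:
            du/dt = Du * lap(u) - u*v^2 + F*(1 - u)
            dv/dt = Dv * lap(v) + u*v^2 - (F + k)*v
        Parameters sit in a commonly cited self-replicating-spot ("mitosis") regime.
        """
        rng = np.random.default_rng(seed)
        u = np.ones((n, n))
        v = np.zeros((n, n))
        # Perturb a small square in the centre so spots can nucleate.
        s = slice(n // 2 - 5, n // 2 + 5)
        u[s, s], v[s, s] = 0.50, 0.25
        u += 0.02 * rng.random((n, n))
        v += 0.02 * rng.random((n, n))

        def lap(a):  # 5-point Laplacian with periodic boundaries
            return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                    np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

        for _ in range(steps):
            uvv = u * v * v
            u += Du * lap(u) - uvv + F * (1.0 - u)
            v += Dv * lap(v) + uvv - (F + k) * v
        return u, v
    ```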

  1. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

    In this paper we study the representation of KL-divergence minimization, in the cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method to compute an implicit representation of minimum KL-divergence models.
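
    For orientation (these are standard facts about I-projections, not the paper's own derivation): minimizing the KL divergence \(D(p\,\|\,q)\) subject to linear constraints on sufficient statistics \(T_i\) gives an exponential-family solution,

    \[ p^{*}(x) \;=\; \frac{q(x)\,\exp\!\big(\textstyle\sum_i \lambda_i T_i(x)\big)}{Z(\lambda)} \;=\; \frac{q(x)\,\prod_i \theta_i^{\,T_i(x)}}{Z(\theta)}, \qquad \theta_i = e^{\lambda_i}. \]

    When the \(T_i\) take integer values, each \(p^{*}(x)\) is a monomial in the \(\theta_i\) divided by \(Z\), so eliminating \(\theta\) and \(Z\) leaves polynomial relations among the probabilities; this is the kind of implicit (variety) description that Gröbner basis computations can produce.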

  2. Prevalence of Tuberculosis among Veterans, Military Personnel and their Families in East Azerbaijan Province Violators of the last 15 Years.

    PubMed

    Azad Aminjan, Maboud; Moaddab, Seyyed Reza; Hosseini Ravandi, Mohammad; Kazemi Haki, Behzad

    2015-10-01

    Nowadays, tuberculosis is the second largest killer of adults in the world after HIV. Because presidios (military posts) are mostly located in hazardous zones, soldiers and army personnel are considered high risk; therefore, we decided to determine the prevalence of tuberculosis in this group of people. This was a cross-sectional descriptive study of the prevalence of pulmonary tuberculosis in soldiers and military personnel over the last 15 years at the Tuberculosis and Lung Disease Research Center of Tabriz University of Medical Sciences. The statistical population consisted of all the soldiers and military personnel. Detection in this study was based on microscopic examination following Ziehl-Neelsen staining and on culture on Löwenstein-Jensen medium. Descriptive statistics were used for the statistical analysis, and P values less than 0.05 were considered significant. A review of the information held at this center for 1988-2013 identified 72 military-affiliated individuals suffering from tuberculosis: 30 women and 42 men, comprising 14 soldiers, 29 family members, and 29 military personnel. A significant correlation was found between TB rates among military personnel and their families. Although in recent years the national statistics indicate a decline in tuberculosis, the results of our study showed that TB is still a serious disease, and the first symptoms of tuberculosis in military personnel and their families should be recognized and diagnosed as soon as possible.

  3. A Multidisciplinary Approach for Teaching Statistics and Probability

    ERIC Educational Resources Information Center

    Rao, C. Radhakrishna

    1971-01-01

    The author presents a syllabus for an introductory (first year after high school) course in statistics and probability and some methods of teaching statistical techniques. The description comes basically from the procedures used at the Indian Statistical Institute, Calcutta. (JG)

  4. The Performance and Retention of Female Navy Officers with a Military Spouse

    DTIC Science & Technology

    2017-03-01

    Excerpts from the thesis front matter: Female Officer Retention and Dual-Military Couples; Demographic Statistics; Data Description and Statistics; Independent Variables; Summary Statistics.

  5. New statistical scission-point model to predict fission fragment observables

    NASA Astrophysics Data System (ADS)

    Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie

    2015-09-01

    The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.

  6. Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island

    NASA Astrophysics Data System (ADS)

    E Komalasari, K.; Pawitan, H.; Faqih, A.

    2017-03-01

    This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to characterize the central tendency, variation, and distribution of the maximum precipitation data. The mean and median are used to measure central tendency, while the interquartile range (IQR) and standard deviation are used to measure variation. In addition, skewness and kurtosis are used to describe the shape of the rainfall distribution. Cluster analysis using squared Euclidean distance and Ward's method is applied to group the stations regionally. The results show that the mean of maximum daily rainfall in Java during 1983-2012 is around 80-181 mm, with medians between 75 and 160 mm and standard deviations between 17 and 82. The cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, with greater variability in the annual maximum values.
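
    A minimal sketch of this kind of workflow, using synthetic station series in place of the Java rainfall data (which are not reproduced here): per-station descriptive statistics are computed and then grouped with Ward's hierarchical clustering.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Synthetic annual-maximum daily rainfall (mm) for 12 hypothetical stations, 30 years each.
    stations = rng.gamma(shape=6.0, scale=20.0, size=(12, 30))

    # Per-station descriptive statistics: central tendency, spread, and shape.
    summary = np.column_stack([
        stations.mean(axis=1),                                     # mean
        np.median(stations, axis=1),                               # median
        stations.std(axis=1, ddof=1),                              # standard deviation
        np.subtract(*np.percentile(stations, [75, 25], axis=1)),   # interquartile range
        skew(stations, axis=1),
        kurtosis(stations, axis=1),
    ])

    # Ward's method minimizes within-cluster variance; scipy's implementation works on
    # Euclidean distances, consistent with the squared-Euclidean criterion.
    Z = linkage(summary, method="ward")
    labels = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into 4 clusters
    print("cluster labels per station:", labels)
    ```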

  7. Back to basics: an introduction to statistics.

    PubMed

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  8. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital for mastering the core content of any subject, and statistics courses at the university level are no exception. This study therefore investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used was a questionnaire adopted and adapted from the reliable Survey of Attitudes Towards Statistics (SATS©). The study was conducted with engineering undergraduate students at a university on the East Coast of Malaysia; the respondents were students from different faculties taking the applied statistics course. The results are analysed descriptively and contribute to a descriptive understanding of students' attitudes towards the teaching and learning of statistics.

  9. Attention-deficit hyperactivity disorder in ancient Greece: The Obtuse Man of Theophrastus.

    PubMed

    Victor, Marcelo M; S da Silva, Bruna; Kappel, Djenifer B; Bau, Claiton Hd; Grevet, Eugenio H

    2018-06-01

    We present an ancient Greek description, written by the philosopher Theophrastus in his classic book 'Characters', that is comparable with modern attention-deficit hyperactivity disorder. The argument is based on one chapter of this book, The Obtuse Man, which presents features of a character closely resembling the modern description of attention-deficit hyperactivity disorder. In a free comparative exercise, we compared Theophrastus' descriptions with the attention-deficit hyperactivity disorder symptoms of the modern Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5). The sentences describing The Obtuse Man are similar to several symptoms of attention-deficit hyperactivity disorder, and he would probably be diagnosed with this disorder as an adult today. To our knowledge, this is the oldest description compatible with the current conception of adult attention-deficit hyperactivity disorder in the Western literature. In contrast to the moralistic view of ancient Greece regarding those symptoms, the medical attention-deficit hyperactivity disorder conception may be advantageous to patients, since it might reduce prejudice and allow individuals to seek treatment.

  10. Skeletal and dental effects of rapid maxillary expansion assessed through three-dimensional imaging: A multicenter study.

    PubMed

    Luebbert, Joshua; Ghoneima, Ahmed; Lagravère, Manuel O

    2016-03-01

    The aim of this study was to determine the skeletal and dental changes produced by rapid maxillary expansion treatment in two different populations, assessed through cone-beam computed tomography (CBCT). Twenty-one patients from Edmonton, Canada and 16 patients from Cairo, Egypt with maxillary transverse deficiency (11-17 years old) were treated with a tooth-borne maxillary expander (Hyrax). CBCTs were obtained from each patient at two time points (initial, T1, and at removal of the appliance at 3-6 months, T2). CBCTs were analyzed using AVIZO software, and landmarks were placed on skeletal and dental anatomical structures of the cranial base, maxilla, and mandible. Descriptive statistics, intraclass correlation coefficients, and one-way ANOVA were used to determine whether there were skeletal and dental changes and whether these changes differed statistically between the two populations. Descriptive statistics show that dental changes were larger than skeletal changes in both populations. Skeletal and dental changes did not differ statistically between populations (P > 0.05), with the exception of the upper incisor proclination, which was larger in the Indiana group (P < 0.05). Rapid maxillary expansion treatments in different populations demonstrate similar skeletal and dental changes. These changes are greater on the dental structures compared to the skeletal ones, in roughly a 4:1 ratio. Copyright © 2015 CEO. Published by Elsevier Masson SAS. All rights reserved.

  11. Student's Conceptions in Statistical Graph's Interpretation

    ERIC Educational Resources Information Center

    Kukliansky, Ida

    2016-01-01

    Histograms, box plots and cumulative distribution graphs are popular graphic representations for statistical distributions. The main research question that this study focuses on is how college students deal with interpretation of these statistical graphs when translating graphical representations into analytical concepts in descriptive statistics.…

  12. Descriptive Statistics: Reporting the Answers to the 5 Basic Questions of Who, What, Why, When, Where, and a Sixth, So What?

    PubMed

    Vetter, Thomas R

    2017-11-01

    Descriptive statistics are specific methods basically used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome), and generating a corresponding confidence interval are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
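
    As a small illustration of the quantities discussed in this tutorial, the snippet below computes measures of central tendency and dispersion and a 95% confidence interval for a mean on an assumed example data set.

    ```python
    import numpy as np
    from scipy import stats

    data = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4, 5.8, 5.0])  # example measurements

    mean, median = data.mean(), np.median(data)
    # (The mode is omitted here; it is rarely informative for continuous measurements.)
    sd = data.std(ddof=1)                               # sample standard deviation
    iqr = np.subtract(*np.percentile(data, [75, 25]))   # interquartile range
    data_range = data.max() - data.min()                # range

    # 95% confidence interval for the mean, based on the t distribution.
    sem = sd / np.sqrt(len(data))
    lo, hi = stats.t.interval(0.95, len(data) - 1, loc=mean, scale=sem)

    print(f"mean={mean:.2f} (95% CI {lo:.2f} to {hi:.2f}), median={median:.2f}, "
          f"SD={sd:.2f}, IQR={iqr:.2f}, range={data_range:.2f}")
    ```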

  13. Data on the relationships between financing strategies, entrepreneurial competencies and business growth of technology-based SMEs in Nigeria.

    PubMed

    Ibidunni, Ayodotun Stephen; Kehinde, Oladele Joseph; Ibidunni, Oyebisi Mary; Olokundun, Maxwell Ayodele; Olubusayo, Falola Hezekiah; Salau, Odunayo Paul; Borishade, Taiye Tairat; Fred, Peter

    2018-06-01

    The article presents data on the relationships between financing strategies, entrepreneurial competencies, and business growth of technology-based SMEs in Nigeria. Copies of a structured questionnaire were administered to 233 SME owners and financial managers. Using descriptive and standard multiple regression statistical analysis, the data revealed that venture capital and business donations significantly influence the profit growth of technology-based SMEs. Moreover, the data revealed that technology-based firms can enhance their access to financing through capacity building in entrepreneurial competencies, such as acquiring the right skills and attitude.

  14. Significant Pre-Accession Factors Predicting Success or Failure During a Marine Corps Officer’s Initial Service Obligation

    DTIC Science & Technology

    2015-12-01

    Excerpts from the thesis front matter: Appendix B, Waivers; Appendix C, Descriptive Statistics; summary statistics tables for the dependent, academics, and application variables.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
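
    A toy sketch of the map-reduce pattern described above, using invented categorical shards: each worker counts category pairs locally, the partial contingency tables merge by addition, and derived statistics such as the chi-square independence test follow from the merged table. This is not the cited open-source implementation, only an illustration of why the reduction step scales with table size.

    ```python
    import numpy as np
    from collections import Counter
    from scipy.stats import chi2_contingency

    def partial_table(records):
        """Map step: count (row_category, col_category) pairs in one data shard."""
        return Counter(records)

    def merge(tables):
        """Reduce step: contingency tables merge by simple addition of counts."""
        total = Counter()
        for t in tables:
            total.update(t)
        return total

    # Two hypothetical shards of categorical records (e.g., exposure x outcome).
    shard1 = [("a", "x"), ("a", "y"), ("b", "x"), ("a", "x"), ("b", "y")]
    shard2 = [("b", "y"), ("a", "x"), ("b", "y"), ("b", "x"), ("a", "y")]

    merged = merge([partial_table(shard1), partial_table(shard2)])

    rows = sorted({r for r, _ in merged})
    cols = sorted({c for _, c in merged})
    table = np.array([[merged[(r, c)] for c in cols] for r in rows])

    chi2, p, dof, _ = chi2_contingency(table)
    print(table, f"\nchi2={chi2:.3f}, p={p:.3f}, dof={dof}")
    ```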

  16. 21 CFR 314.50 - Content and format of an application.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the protocol and a description of the statistical analyses used to evaluate the study. If the study... application: (i) Three copies of the analytical procedures and related descriptive information contained in... the samples and to validate the applicant's analytical procedures. The related descriptive information...

  17. 21 CFR 314.50 - Content and format of an application.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the protocol and a description of the statistical analyses used to evaluate the study. If the study... application: (i) Three copies of the analytical procedures and related descriptive information contained in... the samples and to validate the applicant's analytical procedures. The related descriptive information...

  18. 21 CFR 314.50 - Content and format of an application.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the protocol and a description of the statistical analyses used to evaluate the study. If the study... application: (i) Three copies of the analytical procedures and related descriptive information contained in... the samples and to validate the applicant's analytical procedures. The related descriptive information...

  19. 21 CFR 314.50 - Content and format of an application.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the protocol and a description of the statistical analyses used to evaluate the study. If the study... application: (i) Three copies of the analytical procedures and related descriptive information contained in... the samples and to validate the applicant's analytical procedures. The related descriptive information...

  20. 21 CFR 314.50 - Content and format of an application.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the protocol and a description of the statistical analyses used to evaluate the study. If the study... application: (i) Three copies of the analytical procedures and related descriptive information contained in... the samples and to validate the applicant's analytical procedures. The related descriptive information...

  1. Ion Channel Conductance Measurements on a Silicon-Based Platform

    DTIC Science & Technology

    2006-01-01

    Excerpts from the report: ion channel conductance was calculated using the molecular dynamics code GROMACS, with reasonable agreement obtained between simulated and measured conductance; AC conductance measurements and statistical analysis of the lipid giga-seal characteristics were performed; the simulation couples a molecular dynamics kernel self-consistently to Poisson equations using a P3M force-field scheme and the GROMACS description of protein structure.

  2. North Texas Sediment Budget: Sabine Pass to San Luis Pass

    DTIC Science & Technology

    2006-09-01

    Excerpts from the report: figure captions describing concrete units placed over sand-filled fabric tubes and sand-filled fabric tubes protecting the shoreline (Figure 28); coordinate system UTM Zone 15, NAD 83; longshore drift directions after King (in preparation), based on wave hindcast statistics and limited buoy data; Rollover Pass; descriptions of the jetties and limited geographic coordinate data (Figure 18); the original vellum or Mylar sheets from which the report was prepared.

  3. A New Mathematical Framework for Design Under Uncertainty

    DTIC Science & Technology

    2016-05-05

    Excerpts from the report: blending multiple information sources via auto-regressive stochastic modeling; a computationally efficient machine learning framework (see Fig. 1) leading to a comprehensive description of system performance with less uncertainty; Bayesian optimization of super-cavitating hydrofoils, with the goal of demonstrating the capabilities of statistical learning.

  4. Hazardous Communication and Tools for Quality: Basic Statistics. Responsive Text. Educational Materials for the Workplace.

    ERIC Educational Resources Information Center

    Vermont Inst. for Self-Reliance, Rutland.

    This guide provides a description of Responsive Text (RT), a method for presenting job-relevant information within a computer-based support system. A summary of what RT is and why it is important is provided first. The first section of the guide provides a brief overview of what research tells about the reading process and how the general design…

  5. One Yard Below: Education Statistics from a Different Angle.

    ERIC Educational Resources Information Center

    Education Intelligence Agency, Carmichael, CA.

    This report offers a different perspective on education statistics by highlighting rarely used "stand-alone" statistics on public education, inputs, outputs, and descriptions, and it uses interactive statistics that combine two or more statistics in an unusual way. It is a report that presents much evidence, but few conclusions. It is not intended…

  6. A Bibliography of Statistical Applications in Geography, Technical Paper No. 9.

    ERIC Educational Resources Information Center

    Greer-Wootten, Bryn; And Others

    Included in this bibliography are resource materials available to both college instructors and students on statistical applications in geographic research. Two stages of statistical development are treated in the bibliography. They are 1) descriptive statistics, in which the sample is the focus of interest, and 2) analytical statistics, in which…

  7. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
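
    The sketch below is not the authors' criterion; it is a generic 1-D MLEM loop with a simple statistically motivated stopping rule (stop once the Pearson chi-square of the fit drops to the number of detector bins), using a synthetic Gaussian-blur system matrix and Poisson data, to illustrate the kind of rule the abstract refers to.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic 1-D "emission tomography" problem: blurring system matrix A, true object x_true.
    n = 64
    x_true = np.zeros(n); x_true[20:28] = 50.0; x_true[40:44] = 80.0
    A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)] for i in range(n)])
    A /= A.sum(axis=1, keepdims=True)
    y = rng.poisson(A @ x_true).astype(float)       # Poisson-distributed measured counts

    x = np.ones(n)                                  # MLEM initial estimate
    sensitivity = A.sum(axis=0)                     # A^T 1
    for it in range(1, 501):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sensitivity   # standard MLEM update
        # Heuristic stop: Pearson chi-square of the current fit versus the number of bins.
        chi2 = np.sum((y - proj) ** 2 / np.maximum(proj, 1e-12))
        if chi2 <= n:
            print(f"stopped at iteration {it}, chi2={chi2:.1f} for {n} bins")
            break
    ```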

  8. Research Education in Undergraduate Occupational Therapy Programs.

    ERIC Educational Resources Information Center

    Petersen, Paul; And Others

    1992-01-01

    Of 63 undergraduate occupational therapy programs surveyed, the 38 responses revealed some common areas covered: elementary descriptive statistics, validity, reliability, and measurement. Areas underrepresented include statistical analysis with or without computers, research design, and advanced statistics. (SK)

  9. Policy Safeguards and the Legitimacy of Highway Interdiction

    DTIC Science & Technology

    2016-12-01

    Excerpts from the thesis table of contents: Bias within Law Enforcement; Statistical Data Gathering; Controlling Discretion; Statistical Data Collection for Traffic Stops; Description of Statistical Data Collected; Data Organization and Analysis.

  10. Fish: A New Computer Program for Friendly Introductory Statistics Help

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Raffle, Holly

    2005-01-01

    All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…

  11. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis.

    PubMed

    Neyeloff, Jeruza L; Fuchs, Sandra C; Moreira, Leila B

    2012-01-20

    Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but no guide was previously available. We constructed a step-by-step guide to performing a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.
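
    For readers outside Excel, the following is a minimal sketch of the pooling such a spreadsheet implements: inverse-variance fixed-effect pooling and DerSimonian-Laird random-effects pooling. The study estimates and standard errors are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical study-level estimates (e.g., log risk ratios) and their standard errors.
    est = np.array([0.10, 0.25, -0.05, 0.30, 0.15])
    se = np.array([0.12, 0.20, 0.15, 0.25, 0.10])

    # Fixed-effect (inverse-variance) pooling.
    w = 1 / se**2
    fixed = np.sum(w * est) / np.sum(w)
    fixed_se = np.sqrt(1 / np.sum(w))

    # DerSimonian-Laird random-effects pooling.
    Q = np.sum(w * (est - fixed) ** 2)
    df = len(est) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
    w_re = 1 / (se**2 + tau2)
    random_ = np.sum(w_re * est) / np.sum(w_re)
    random_se = np.sqrt(1 / np.sum(w_re))

    for name, mu, s in [("fixed", fixed, fixed_se), ("random", random_, random_se)]:
        print(f"{name}-effect: {mu:.3f} (95% CI {mu - 1.96*s:.3f} to {mu + 1.96*s:.3f})")
    ```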

  12. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    PubMed Central

    2012-01-01

    Background Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but no guide was previously available. Findings We constructed a step-by-step guide to performing a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. Conclusions It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software. PMID:22264277

  13. The Practicality of Statistical Physics Handout Based on KKNI and the Constructivist Approach

    NASA Astrophysics Data System (ADS)

    Sari, S. Y.; Afrizon, R.

    2018-04-01

    Statistical physics lectures show that: 1) the performance of lecturers, the social climate, and the students' competence and the soft skills needed at work are only in the 'sufficient' category; 2) students find statistical physics lectures difficult to follow because the material is abstract; 3) 40.72% of students need further reinforcement in the form of repetition, practice questions, and structured tasks; and 4) the depth of the statistical physics material needs to be improved gradually and in a structured way. This indicates that learning materials aligned with the Indonesian National Qualification Framework, or Kerangka Kualifikasi Nasional Indonesia (KKNI), combined with an appropriate learning approach, are needed to help lecturers and students. The author has designed statistical physics handouts that meet the 'very valid' criterion (90.89%) according to expert judgment. In addition, the practicality of the handouts needs to be considered so that they are easy to use, interesting, and efficient in lectures. The purpose of this research is to determine the practicality of a statistical physics handout based on KKNI and a constructivist approach. This research is part of a research and development project using the 4-D model developed by Thiagarajan, and has reached the development testing phase of the Development stage. Data were collected using a questionnaire distributed to lecturers and students and analyzed with descriptive techniques in the form of percentages. The analysis of the questionnaire shows that the statistical physics handout meets the 'very practical' criterion. The conclusion of this study is that statistical physics handouts based on KKNI and a constructivist approach are practical for use in lectures.

  14. Findings From a Nursing Care Audit Based on the Nursing Process: A Descriptive Study

    PubMed Central

    Poortaghi, Sarieh; Salsali, Mahvash; Ebadi, Abbas; Rahnavard, Zahra; Maleki, Farzaneh

    2015-01-01

    Background: Although using the nursing process improves nursing care quality, few studies have evaluated nursing performance in accordance with nursing process steps either nationally or internationally. Objectives: This study aimed to audit nursing care based on a nursing process model. Patients and Methods: This was a cross-sectional descriptive study in which a nursing audit checklist was designed and validated for assessing nurses’ compliance with nursing process. A total of 300 nurses from various clinical settings of Tehran university of medical sciences were selected. Data were analyzed using descriptive and inferential statistics, including frequencies, Pearson correlation coefficient and independent samples t-tests. Results: The compliance rate of nursing process indicators was 79.71 ± 0.87. Mean compliance scores did not significantly differ by education level and gender. However, overall compliance scores were correlated with nurses’ age (r = 0.26, P = 0.001) and work experience (r = 0.273, P = 0.001). Conclusions: Nursing process indicators can be used to audit nursing care. Such audits can be used as quality assurance tools. PMID:26576448

  15. A Study of Strengths and Weaknesses of Descriptive Assessment from Principals, Teachers and Experts Points of View in Chaharmahal and Bakhteyari Primary Schools

    ERIC Educational Resources Information Center

    Sharief, Mostafa; Naderi, Mahin; Hiedari, Maryam Shoja; Roodbari, Omolbanin; Jalilvand, Mohammad Reza

    2012-01-01

    The aim of current study is to determine the strengths and weaknesses of descriptive evaluation from the viewpoint of principals, teachers and experts of Chaharmahal and Bakhtiari province. A descriptive survey was performed. Statistical population includes 208 principals, 303 teachers, and 100 executive experts of descriptive evaluation scheme in…

  16. Intercomparison of textural parameters of intertidal sediments generated by different statistical procedures, and implications for a unifying descriptive nomenclature

    NASA Astrophysics Data System (ADS)

    Fan, Daidu; Tu, Junbiao; Cai, Guofu; Shang, Shuai

    2015-06-01

    Grain-size analysis is a basic routine in sedimentology and related fields, but diverse methods of sample collection, processing and statistical analysis often make direct comparisons and interpretations difficult or even impossible. In this paper, 586 published grain-size datasets from the Qiantang Estuary (East China Sea) sampled and analyzed by the same procedures were merged and their textural parameters calculated by a percentile and two moment methods. The aim was to explore which of the statistical procedures performed best in the discrimination of three distinct sedimentary units on the tidal flats of the middle Qiantang Estuary. A Gaussian curve-fitting method served to simulate mixtures of two normal populations having different modal sizes, sorting values and size distributions, enabling a better understanding of the impact of finer tail components on textural parameters, as well as the proposal of a unifying descriptive nomenclature. The results show that percentile and moment procedures yield almost identical results for mean grain size, and that sorting values are also highly correlated. However, more complex relationships exist between percentile and moment skewness (kurtosis), changing from positive to negative correlations when the proportions of the finer populations decrease below 35% (10%). This change results from the overweighting of tail components in moment statistics, which stands in sharp contrast to the underweighting or complete amputation of small tail components by the percentile procedure. Intercomparisons of bivariate plots suggest an advantage of the Friedman & Johnson moment procedure over the McManus moment method in terms of the description of grain-size distributions, and over the percentile method by virtue of a greater sensitivity to small variations in tail components. The textural parameter scalings of Folk & Ward were translated into their Friedman & Johnson moment counterparts by application of mathematical functions derived by regression analysis of measured and modeled grain-size data, or by determining the abscissa values of intersections between auxiliary lines running parallel to the x-axis and vertical lines corresponding to the descriptive percentile limits along the ordinate of representative bivariate plots. Twofold limits were extrapolated for the moment statistics in relation to single descriptive terms in the cases of skewness and kurtosis by considering both positive and negative correlations between percentile and moment statistics. The extrapolated descriptive scalings were further validated by examining entire size-frequency distributions simulated by mixing two normal populations of designated modal size and sorting values, but varying in mixing ratios. These were found to match well in most of the proposed scalings, although platykurtic and very platykurtic categories were questionable when the proportion of the finer population was below 5%. Irrespective of the statistical procedure, descriptive nomenclatures should therefore be cautiously used when tail components contribute less than 5% to grain-size distributions.
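
    A short sketch of the comparison described above: two normal grain-size populations (in phi units) are mixed with a 10% finer tail, and Folk & Ward graphic statistics computed from percentiles are set against conventional moment statistics. The modal sizes, sorting values, and mixing ratio are assumptions, not values from the Qiantang data.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(2)

    # Mixture of two normal grain-size populations in phi units (assumed parameters).
    coarse = rng.normal(loc=2.0, scale=0.4, size=9000)   # dominant coarser population
    fine = rng.normal(loc=5.0, scale=0.6, size=1000)     # 10% finer tail population
    phi = np.concatenate([coarse, fine])

    # Folk & Ward graphic statistics from percentiles of the phi distribution.
    p = {q: np.percentile(phi, q) for q in (5, 16, 25, 50, 75, 84, 95)}
    fw_mean = (p[16] + p[50] + p[84]) / 3
    fw_sort = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    fw_skew = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
               + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    fw_kurt = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))

    # Conventional moment statistics on the same sample; the finer tail is weighted
    # much more heavily here than in the percentile-based measures.
    mom = (phi.mean(), phi.std(ddof=1), skew(phi), kurtosis(phi, fisher=False))

    print("Folk & Ward :", [round(v, 3) for v in (fw_mean, fw_sort, fw_skew, fw_kurt)])
    print("Moments     :", [round(float(v), 3) for v in mom])
    ```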

  17. Coastal and Marine Bird Data Base

    USGS Publications Warehouse

    Anderson, S.H.; Geissler, P.H.; Dawson, D.K.

    1980-01-01

    Summary: This report discusses the development of a coastal and marine bird data base at the Migratory Bird and Habitat Research Laboratory. The system is compared with other data bases, and suggestions for future development, such as possible adaptations for other taxonomic groups, are included. The data base is based on the Statistical Analysis System but includes extensions programmed in PL/I. The Appendix shows how the system evolved. Output examples are given for heron data and pelagic bird data which indicate the types of analyses that can be conducted and output figures. The Appendixes include a retrieval language user's guide and description of the retrieval process and listing of translator program.

  18. Statistical Tutorial | Center for Cancer Research

    Cancer.gov

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data.  ST is designed as a follow up to Statistical Analysis of Research Data (SARD) held in April 2018.  The tutorial will apply the general principles of statistical analysis of research data including descriptive statistics, z- and t-tests of means and mean

  19. Gestational surrogacy: Viewpoint of Iranian infertile women

    PubMed Central

    Rahmani, Azad; Sattarzadeh, Nilofar; Gholizadeh, Leila; Sheikhalipour, Zahra; Allahbakhshian, Atefeh; Hassankhani, Hadi

    2011-01-01

    BACKGROUND: Surrogacy is a popular form of assisted reproductive technology, of which only the gestational form is approved by most religious scholars in Iran. Little evidence exists about Iranian infertile women's viewpoint regarding gestational surrogacy. AIM: To assess the viewpoint of Iranian infertile women toward gestational surrogacy. SETTING AND DESIGN: This descriptive study was conducted at the infertility clinic of Tabriz University of Medical Sciences, Iran. MATERIALS AND METHODS: The study sample consisted of 238 infertile women selected using the eligible sampling method. Data were collected using a researcher-developed questionnaire of 25 items based on a five-point Likert scale. STATISTICAL ANALYSIS: Data analysis was conducted with SPSS statistical software using descriptive statistics. RESULTS: The viewpoint of 214 women (89.9%) was positive. Thirty-six women (15.1%) considered gestational surrogacy to be against their religious beliefs; 170 women (71.4%) did not regard the commissioning couple as the owners of the baby; 160 women (67.2%) said that children born through surrogacy would be better off not knowing about it; and 174 women (73.1%) believed that children born through surrogacy will face mental problems. CONCLUSION: Iranian infertile women have a positive viewpoint regarding surrogacy. However, further efforts are needed to increase the acceptability of surrogacy among infertile women. PMID:22346081

  20. Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos

    NASA Astrophysics Data System (ADS)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to capture the neighbourhood correlations between the patches, and (3) automatically selecting the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution over binary images and is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.

  1. One-Carbon Metabolism and Breast Cancer Survival in a Population-Based Study

    DTIC Science & Technology

    2008-06-01

    Excerpts from the report: the study examines the dietary intake of one-carbon-related micronutrients/compounds (e.g., folate, methionine, choline, B vitamins, alcohol) in relation to … of dietary methyl content and overall survival; some descriptive statistical analysis has been reported in a previous annual report; the Kaplan-Meier …

  2. Description of Merged Data Base: Appendix F. The Development of Institutions of Higher Education. Theory and Assessment of Impact of Four Possible Areas of Federal Intervention.

    ERIC Educational Resources Information Center

    Jackson, Gregory A.

    Data in this appendix were used in the Office of Education-supported research study of the development of higher education institutions. Machine-readable quantitative data were gathered from the National Center for Education Statistics, the Office of Civil Rights, the Council on Financial Aid to Education, the National Education Data Library, and…

  3. Profile-IQ: Web-based data query system for local health department infrastructure and activities.

    PubMed

    Shah, Gulzar H; Leep, Carolyn J; Alexander, Dayna

    2014-01-01

    To demonstrate the use of National Association of County & City Health Officials' Profile-IQ, a Web-based data query system, and how policy makers, researchers, the general public, and public health professionals can use the system to generate descriptive statistics on local health departments. This article is a descriptive account of an important health informatics tool based on information from the project charter for Profile-IQ and the authors' experience and knowledge in design and use of this query system. Profile-IQ is a Web-based data query system that is based on open-source software: MySQL 5.5, Google Web Toolkit 2.2.0, Apache Commons Math library, Google Chart API, and Tomcat 6.0 Web server deployed on an Amazon EC2 server. It supports dynamic queries of National Profile of Local Health Departments data on local health department finances, workforce, and activities. Profile-IQ's customizable queries provide a variety of statistics not available in published reports and support the growing information needs of users who do not wish to work directly with data files for lack of staff skills or time, or to avoid a data use agreement. Profile-IQ also meets the growing demand of public health practitioners and policy makers for data to support quality improvement, community health assessment, and other processes associated with voluntary public health accreditation. It represents a step forward in the recent health informatics movement of data liberation and use of open source information technology solutions to promote public health.

  4. Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.

    PubMed

    García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A

    2017-01-01

    A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation; this single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimations reveal the statistics of cell growth and division using only flow cytometry data at a given time. From the relationship between mother and daughter volumes, we also predict that P. acidilactici divides into two successive parallel planes.
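
    A toy stochastic individual-based simulation in the spirit of the framework described: each cell grows exponentially at its own rate and divides, slightly asymmetrically, on reaching a critical size, with daughters inheriting a noisy copy of the rate. All parameter values are assumptions, not estimates from the cited flow cytometry data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Each cell is (size, growth_rate); rates vary between individuals.
    cells = [(1.0 + 0.2 * rng.standard_normal(), abs(rng.normal(0.8, 0.1))) for _ in range(50)]

    dt, t_end, division_size = 0.01, 3.0, 2.0
    for _ in range(int(t_end / dt)):
        next_gen = []
        for size, mu in cells:
            size *= np.exp(mu * dt)                             # exponential single-cell growth
            if size >= division_size:
                f = np.clip(rng.normal(0.5, 0.05), 0.3, 0.7)    # slightly asymmetric division
                for frac in (f, 1 - f):                         # daughters inherit a noisy rate
                    next_gen.append((size * frac, abs(rng.normal(mu, 0.05))))
            else:
                next_gen.append((size, mu))
        cells = next_gen

    sizes = np.array([s for s, _ in cells])
    print(f"{len(cells)} cells; size mean={sizes.mean():.2f}, CV={sizes.std()/sizes.mean():.2f}")
    ```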

  5. Sexual Assault Prevention and Response Climate DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

    Excerpts from the report: the scale-development steps include examining variance and descriptive statistics, examining the relationships among items/areas to reduce multicollinearity, and selecting items that demonstrate the strongest scale properties; a review of the DEOCS 4.0 description and items is included, followed by the proposed measures; Tables 1-7 describe each measure and its corresponding items (Table 1: DEOCS 4.0 Perceptions of Safety Measure Description).

  6. Radiation from quantum weakly dynamical horizons in loop quantum gravity.

    PubMed

    Pranzetti, Daniele

    2012-07-06

    We provide a statistical mechanical analysis of quantum horizons near equilibrium in the grand canonical ensemble. By matching the description of the nonequilibrium phase in terms of weakly dynamical horizons with a local statistical framework, we implement loop quantum gravity dynamics near the boundary. The resulting radiation process provides a quantum gravity description of the horizon evaporation. For large black holes, the spectrum we derive presents a discrete structure which could be potentially observable.

  7. Needs of family caregivers in home care for older adults 1

    PubMed Central

    Bierhals, Carla Cristiane Becker Kottwitz; dos Santos, Naiana Oliveira; Fengler, Fernanda Laís; Raubustt, Kamila Dellamora; Forbes, Dorothy Anne; Paskulin, Lisiane Manganelli Girardi

    2017-01-01

    ABSTRACT Objective: to reveal the felt and normative needs of primary family caregivers when providing instrumental support to older adults enrolled in a Home Care Program in a Primary Health Service in the South of Brazil. Methods: using Bradshaw's taxonomy of needs to explore the caregiver's felt needs (stated needs) and normative needs (defined by professionals), a mixed exploratory study was conducted in three steps: Descriptive quantitative phase with 39 older adults and their caregiver, using a data sheet based on patient records; Qualitative exploratory phase that included 21 caregiver interviews, analyzed by content analysis; Systematic observation, using an observation guide with 16 caregivers, analyzed by descriptive statistics. Results: the felt needs were related to information about instrumental support activities and subjective aspects of care. Caregivers presented more normative needs related to medications care. Conclusion: understanding caregivers' needs allows nurses to plan interventions based on their particularities. PMID:28403338

  8. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  9. Understanding common statistical methods, Part I: descriptive methods, probability, and continuous data.

    PubMed

    Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A

    2011-01-01

    Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.

  10. 34 CFR 668.49 - Institutional fire safety policies and fire statistics.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Institutional fire safety policies and fire statistics... fire statistics. (a) Additional definitions that apply to this section. Cause of fire: The factor or... statistics described in paragraph (c) of this section. (2) A description of each on-campus student housing...

  11. A multi-sector assessment of community organizational capacity for promotion of Chinese immigrant worker health.

    PubMed

    Tsai, Jenny H-C; Thompson, Elaine A

    2017-12-01

    Community-based collaborative approaches have received increased attention as a means for addressing occupational health disparities. Organizational capacity, highly relevant to engaging and sustaining community partnerships, however, is rarely considered in occupational health research. To characterize community organizational capacity specifically relevant to Chinese immigrant worker health, we used a cross-sectional, descriptive design with 36 agencies from six community sectors in King County, Washington. Joint interviews, conducted with two representatives from each agency, addressed three dimensions of organizational capacity: organizational commitment, resources, and flexibility. Descriptive statistics were used to capture the patterning of these dimensions by community sector. Organizational capacity varied widely across and within sectors. Chinese and Pan-Asian service sectors indicated higher capacity for Chinese immigrant worker health than did Chinese faith-based, labor union, public, and Pan-ethnic nonprofit sectors. Variation in organizational capacity in community sectors can inform selection of collaborators for community-based, immigrant worker health interventions. © 2017 Wiley Periodicals, Inc.

  12. Statistical description of turbulent transport for flux driven toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.

    2017-06-01

    A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
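
    A minimal sketch of the analysis step described, using a synthetic intermittent series in place of the gyrokinetic heat-flux data and an assumed ARIMA order: the fitted model absorbs the autocorrelated trend, and the non-Gaussian, bursty character shows up in the residual PDF.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)

    # Synthetic "intermittent transport" series: an AR(1) backbone plus rare positive bursts.
    n = 2000
    noise = rng.standard_normal(n)
    bursts = (rng.random(n) < 0.02) * rng.exponential(5.0, n)   # intermittent events
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.9 * y[t - 1] + noise[t] + bursts[t]

    # Fit an ARIMA model to capture the autocorrelated (trend/oscillatory) part...
    fit = ARIMA(y, order=(1, 0, 1)).fit()
    resid = fit.resid

    # ...so that the non-Gaussian tails remain visible in the residual distribution.
    print(f"residual skewness={skew(resid):.2f}, excess kurtosis={kurtosis(resid):.2f}")
    ```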

  13. Pennsylvania StreamStats--A web-based application for obtaining water-resource-related information

    USGS Publications Warehouse

    Stuckey, Marla H.; Hoffman, Scott A.

    2010-01-01

    StreamStats is a national web-based Geographic Information System (GIS) application, developed by the U.S. Geological Survey (USGS), in cooperation with Environmental Systems Research Institute, Inc., to provide a variety of water-resource-related information. Users can easily obtain descriptive information, basin characteristics, and streamflow statistics for USGS streamgages and ungaged stream locations throughout Pennsylvania. StreamStats also allows users to search upstream and (or) downstream from user-selected points to identify locations of and obtain information for water-resource-related activities, such as dams and streamgages.

  14. A study on phenomenology of Dhat syndrome in men in a general medical setting

    PubMed Central

    Prakash, Sathya; Sharan, Pratap; Sood, Mamta

    2016-01-01

    Background: “Dhat syndrome” is believed to be a culture-bound syndrome of the Indian subcontinent. Although many studies have been performed, many have methodological limitations and there is a lack of agreement in many areas. Aims: The aim is to study the phenomenology of “Dhat syndrome” in men and to explore the possibility of subtypes within this entity. Settings and Design: It is a cross-sectional descriptive study conducted at a sex and marriage counseling clinic of a tertiary care teaching hospital in Northern India. Materials and Methods: An operational definition and assessment instrument for “Dhat syndrome” was developed after taking all concerned stakeholders into account and review of literature. It was applied on 100 patients along with socio-demographic profile, Hamilton Depression Rating Scale, Hamilton Anxiety Rating Scale, Mini International Neuropsychiatric Interview, and Postgraduate Institute Neuroticism Scale. Statistical Analysis: For statistical analysis, descriptive statistics, group comparisons, and Pearson's product moment correlations were carried out. Factor analysis and cluster analysis were done to determine the factor structure and subtypes of “Dhat syndrome.” Results: A diagnostic and assessment instrument for “Dhat syndrome” has been developed and the phenomenology in 100 patients has been described. Both the health beliefs scale and associated symptoms scale demonstrated a three-factor structure. The patients with “Dhat syndrome” could be categorized into three clusters based on severity. Conclusions: There appears to be a significant agreement among various stakeholders on the phenomenology of “Dhat syndrome” although some differences exist. “Dhat syndrome” could be subtyped into three clusters based on severity. PMID:27385844

  15. Assessment and prediction of inter-joint upper limb movement correlations based on kinematic analysis and statistical regression

    NASA Astrophysics Data System (ADS)

    Toth-Tascau, Mirela; Balanean, Flavia; Krepelka, Mircea

    2013-10-01

    Musculoskeletal impairment of the upper limb can cause difficulties in performing basic daily activities. Three dimensional motion analyses can provide valuable data of arm movement in order to precisely determine arm movement and inter-joint coordination. The purpose of this study was to develop a method to evaluate the degree of impairment based on the influence of shoulder movements in the amplitude of elbow flexion and extension based on the assumption that a lack of motion of the elbow joint will be compensated by an increased shoulder activity. In order to develop and validate a statistical model, one healthy young volunteer has been involved in the study. The activity of choice simulated blowing the nose, starting from a slight flexion of the elbow and raising the hand until the middle finger touches the tip of the nose and return to the start position. Inter-joint coordination between the elbow and shoulder movements showed significant correlation. Statistical regression was used to fit an equation model describing the influence of shoulder movements on the elbow mobility. The study provides a brief description of the kinematic analysis protocol and statistical models that may be useful in describing the relation between inter-joint movements of daily activities.
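
    A small sketch of the statistical step described: an ordinary least-squares model predicting elbow flexion amplitude from shoulder angles. The joint-angle data below are simulated and the variable names hypothetical, not the study's recordings.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated per-trial joint angles (degrees): shoulder flexion, shoulder abduction, elbow flexion.
    n = 40
    shoulder_flex = rng.uniform(10, 60, n)
    shoulder_abd = rng.uniform(5, 30, n)
    elbow_flex = 20 + 1.2 * shoulder_flex - 0.5 * shoulder_abd + rng.normal(0, 5, n)

    # Ordinary least squares: elbow amplitude as a linear function of the shoulder angles.
    X = np.column_stack([np.ones(n), shoulder_flex, shoulder_abd])
    beta, *_ = np.linalg.lstsq(X, elbow_flex, rcond=None)
    pred = X @ beta
    r2 = 1 - np.sum((elbow_flex - pred) ** 2) / np.sum((elbow_flex - elbow_flex.mean()) ** 2)
    print("coefficients (intercept, flexion, abduction):", np.round(beta, 2), " R^2:", round(r2, 3))
    ```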

  16. The Effect of Personality Traits of Managers/Supervisor on Job Satisfaction of Medical Sciences University Staffs.

    PubMed

    Abedi, G; Molazadeh-Mahali, Q A; Mirzaian, B; Nadi-Ghara, A; Heidari-Gorji, A M

    2016-01-01

    People today spend most of their time in the workplace, so investigating the factors related to job satisfaction is a research necessity. The purpose of this research was to analyze the effect of managers' personality traits on employee job satisfaction. The present study is a descriptive, causal-comparative one carried out on a statistical sample of 44 managers and 119 employees. Data were examined and analyzed through descriptive and inferential statistics: the independent-samples Student's t-test, one-way ANOVA, and the Kolmogorov-Smirnov test. Findings showed that managers and supervisors with the personality traits of extraversion, eagerness for new experiences, adaptability, and dutifulness had higher subordinate employee job satisfaction. The result was different for the neurotic trait: job satisfaction was low in the presence of neuroticism. Based on this, it is suggested that, before any selection for managerial and supervisory positions, candidates receive a personality test, and that appropriate intervention take place, both for the managers and for the employees, when an individual shows a neurotic trait.

  17. Comparing data collected by computerized and written surveys for adolescence health research.

    PubMed

    Wu, Ying; Newfield, Susan A

    2007-01-01

    This study assessed whether data-collection formats, computerized versus paper-and-pencil, affect response patterns and descriptive statistics for adolescent health assessment surveys. Youth were assessed as part of a health risk reduction program. Baseline data from 1131 youth were analyzed. Participants completed the questionnaire either by computer (n = 390) or by paper-and-pencil (n = 741). The rate of returned surveys meeting inclusion requirements was 90.6% and did not differ by methods. However, the computerized method resulted in significantly less incompleteness but more identical responses. Multiple regression indicated that the survey methods did not contribute to problematic responses. The two survey methods yielded similar scale internal reliability and descriptive statistics for behavioral and psychological outcomes, although the computerized method elicited higher reports of some risk items such as carrying a knife, beating up a person, selling drugs, and delivering drugs. Overall, the survey method did not produce a significant difference in outcomes. This provides support for program personnel selecting survey methods based on study goals with confidence that the method of administration will not have a significant impact on the outcome.

  18. Nonlocal transport in the presence of transport barriers

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2013-10-01

    There is experimental, numerical, and theoretical evidence that transport in plasmas can, under certain circumstances, depart from the standard local, diffusive description. Examples include fast pulse propagation phenomena in perturbative experiments, non-diffusive scaling in L-mode plasmas, and non-Gaussian statistics of fluctuations. From the theoretical perspective, non-diffusive transport descriptions follow from the relaxation of the restrictive assumptions (locality, scale separation, and Gaussian/Markovian statistics) at the foundation of diffusive models. We discuss an alternative class of models able to capture some of the observed non-diffusive transport phenomenology. The models are based on a class of nonlocal, integro-differential operators that provide a unifying framework to describe non-Fickian scale-free transport and non-Markovian (memory) effects. We study the interplay between nonlocality and internal transport barriers (ITBs) in perturbative transport, including cold edge pulses and power modulation. Of particular interest is the nonlocal ``tunnelling'' of perturbations through ITBs. Flux-gradient diagrams are also discussed as diagnostics to detect nonlocal transport processes in numerical simulations and experiments. Work supported by the US Department of Energy.

  19. The Effect of Personality Traits of Managers/Supervisor on Job Satisfaction of Medical Sciences University Staffs

    PubMed Central

    Abedi, G; Molazadeh-Mahali, QA; Mirzaian, B; Nadi-Ghara, A; Heidari-Gorji, AM

    2016-01-01

    Background: Today, people spend most of their working lives in the workplace, so investigating the factors related to job satisfaction is a research necessity. Aim: The purpose of this research was to analyze the effect of managers' personality traits on employee job satisfaction. Subjects and Methods: The present study is a descriptive, causal-comparative one carried out on a statistical sample of 44 managers and 119 employees. Data were examined and analyzed through descriptive and inferential statistics: Student's t-test (independent t-test), one-way ANOVA, and the Kolmogorov–Smirnov test. Results: Findings showed that managers and supervisors with the personality traits of extraversion, openness to new experiences, adaptability, and dutifulness had higher subordinate employee job satisfaction. However, for the neurotic trait, the result was different. Conclusion: The results showed that job satisfaction was low in the aspect of neuroticism. Based on this, it is suggested that, before any selection for managerial and supervisory positions, candidates receive a personality test, and if an individual shows a neurotic trait, an appropriate intervention take place for both the managerial group and the employees. PMID:28480099
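
    The Methods above name three standard tests. As a rough illustration of how they fit together, the sketch below runs a Kolmogorov-Smirnov normality check, an independent-samples t-test, and a one-way ANOVA with SciPy; all scores and group labels are hypothetical, invented only to show the calls.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical job-satisfaction scores for employees under two groups of managers
    low_neuroticism = rng.normal(loc=75, scale=8, size=60)
    high_neuroticism = rng.normal(loc=68, scale=8, size=59)

    # Kolmogorov-Smirnov test of the standardized scores against a normal distribution
    z = (low_neuroticism - low_neuroticism.mean()) / low_neuroticism.std(ddof=1)
    print(stats.kstest(z, "norm"))

    # Independent-samples t-test between the two manager groups
    print(stats.ttest_ind(low_neuroticism, high_neuroticism))

    # One-way ANOVA across three hypothetical groups (e.g., extraversion tertiles)
    all_scores = np.concatenate([low_neuroticism, high_neuroticism])
    g1, g2, g3 = np.array_split(rng.permutation(all_scores), 3)
    print(stats.f_oneway(g1, g2, g3))
    ```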

  20. FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)

    DTIC Science & Technology

    2017-06-01

    approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance... distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted

  1. Textual production of children without learning difficulties.

    PubMed

    Santos, Maria Aparecida Gonçalves dos; Hage, Simone Rocha de Vasconcellos

    2015-01-01

    To characterize the writing skills of students, to compare the performance of students in public and private schools, and to identify enhancements in the course of the school year. Three texts (a narrative, a description of game rules, and a note or letter) written by 160 students from public and private schools were analyzed based on a specific protocol. Descriptive statistical analysis was performed. To compare overall performance on the protocol between school grades, the Kruskal-Wallis and Miller tests were used, and to compare results between schools (private and public), the Mann-Whitney test was used. Median values of aesthetic aspects, coherence, clarity, and concision for the game rules description among public school students remained one point below the top score. Students from private schools achieved the highest score at the medians. When comparing schools, private institutions had students with better performances, with a significant difference. As to grades, a statistically significant difference was found between the fourth and sixth grades of public schools and between the fourth and fifth grades of private schools. Most of the private school children showed consolidation of the skills assessed in the different grades. However, public school children showed this consolidation only in the sixth grade. Students from private schools had better performances compared to those from public schools. There is a tendency toward improvement from the fourth to the sixth grade in public schools. However, overall performance is similar across all grades in private schools.

  2. Mathematical and Statistical Software Index. Final Report.

    ERIC Educational Resources Information Center

    Black, Doris E., Comp.

    Brief descriptions are provided of general-purpose mathematical and statistical software, including 27 "stand-alone" programs, three subroutine systems, and two nationally recognized statistical packages, which are available in the Air Force Human Resources Laboratory (AFHRL) software library. This index was created to enable researchers…

  3. Education Statistics Quarterly, Spring 2001.

    ERIC Educational Resources Information Center

    Education Statistics Quarterly, 2001

    2001-01-01

    The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products and funding opportunities developed over a 3-month period. Each issue…

  4. Parametric study of the dynamic JWL-EOS for detonation products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urtiew, P.A.; Hayes, B.

    1990-03-01

    The JWL equation of state describing the adiabatic expansion of detonation products is revisited to complete the description of the principal eigenvalue, to reset the secondary eigenvalue to produce a well-behaved adiabatic gamma profile, and to normalize the characteristic equation of state in terms of conventional parameters having a clear experimental interpretation. This is accomplished by interjecting a dynamic flow condition concerning the value of the relative specific volume when the particle velocity of the detonation products is zero. In addition, a set of generic parameters based on the statistical distribution of the primary explosives making up the available data base is presented. Unlike theoretical and statistical mechanical models, the adiabatic gamma function for these materials is seen to have a positive initial slope in accord with experimental findings. 10 refs., 4 figs.

  5. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    NASA Astrophysics Data System (ADS)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consist of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.

  6. Methodological considerations for the evaluation of EEG mapping data: a practical example based on a placebo/diazepam crossover trial.

    PubMed

    Jähnig, P; Jobert, M

    1995-01-01

    Quantitative EEG is a sensitive method for measuring pharmacological effects on the central nervous system. Nowadays, computers enable EEG data to be stored and spectral parameters to be computed for signals obtained from a large number of electrode locations. However, the statistical analysis of such vast amounts of EEG data is complicated due to the limited number of subjects usually involved in pharmacological studies. In the present study, data from a trial aimed at comparing diazepam and placebo were used to investigate different properties of EEG mapping data and to compare different methods of data analysis. Both the topography and the temporal changes of EEG activity were investigated using descriptive data analysis, which is based on an inspection of patterns of pd values (descriptive p values) assessed for all pair-wise tests for differences in time or treatment. An empirical measure (tri-mean) for the computation of group maps is suggested, allowing a better description of group effects with skewed data of small sample sizes. Finally, both the investigation of maps based on principal component analysis and the notion of distance between maps are discussed and applied to the analysis of the data collected under diazepam treatment, exemplifying the evaluation of pharmacodynamic drug effects.
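
    The tri-mean mentioned above is commonly defined, following Tukey, as the weighted average of the quartiles; the paper's exact variant may differ. A minimal sketch of that robust group measure on invented per-electrode values:

    ```python
    import numpy as np

    def trimean(x):
        """Tukey's trimean: (Q1 + 2*median + Q3) / 4, a robust central measure."""
        q1, med, q3 = np.percentile(x, [25, 50, 75])
        return (q1 + 2 * med + q3) / 4.0

    # Hypothetical alpha-band power values for one electrode across a small group
    values = np.array([3.1, 2.8, 3.4, 2.9, 12.5, 3.0, 3.3, 2.7])
    print("mean    :", values.mean())     # pulled upward by the skewed outlier
    print("trimean :", trimean(values))   # close to the bulk of the data
    ```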

  7. Radar derived spatial statistics of summer rain. Volume 1: Experiment description

    NASA Technical Reports Server (NTRS)

    Katz, I.; Arnold, A.; Goldhirsh, J.; Konrad, T. G.; Vann, W. L.; Dobson, E. B.; Rowland, J. R.

    1975-01-01

    An experiment was performed at Wallops Island, Virginia, to obtain a statistical description of summer rainstorms. Its purpose was to obtain information needed for design of earth and space communications systems in which precipitation in the earth's atmosphere scatters or attenuates the radio signal. Rainstorms were monitored with the high resolution SPANDAR radar and the 3-dimensional structures of the storms were recorded on digital tape. The equipment, the experiment, and tabulated data obtained during the experiment are described.

  8. Preparing for the first meeting with a statistician.

    PubMed

    De Muth, James E

    2008-12-15

    Practical statistical issues that should be considered when performing data collection and analysis are reviewed. The meeting with a statistician should take place early in the research development before any study data are collected. The process of statistical analysis involves establishing the research question, formulating a hypothesis, selecting an appropriate test, sampling correctly, collecting data, performing tests, and making decisions. Once the objectives are established, the researcher can determine the characteristics or demographics of the individuals required for the study, how to recruit volunteers, what type of data are needed to answer the research question(s), and the best methods for collecting the required information. There are two general types of statistics: descriptive and inferential. Presenting data in a more palatable format for the reader is called descriptive statistics. Inferential statistics involve making an inference or decision about a population based on results obtained from a sample of that population. In order for the results of a statistical test to be valid, the sample should be representative of the population from which it is drawn. When collecting information about volunteers, researchers should only collect information that is directly related to the study objectives. Important information that a statistician will require first is an understanding of the type of variables involved in the study and which variables can be controlled by researchers and which are beyond their control. Data can be presented in one of four different measurement scales: nominal, ordinal, interval, or ratio. Hypothesis testing involves two mutually exclusive and exhaustive statements related to the research question. Statisticians should not be replaced by computer software, and they should be consulted before any research data are collected. When preparing to meet with a statistician, the pharmacist researcher should be familiar with the steps of statistical analysis and consider several questions related to the study to be conducted.

  9. Meteor localization via statistical analysis of spatially temporal fluctuations in image sequences

    NASA Astrophysics Data System (ADS)

    Kukal, Jaromír.; Klimt, Martin; Šihlík, Jan; Fliegel, Karel

    2015-09-01

    Meteor detection is one of the most important procedures in astronomical imaging. The meteor path in Earth's atmosphere is traditionally reconstructed from a double-station video observation system generating 2D image sequences. However, atmospheric turbulence and other factors cause spatially-temporal fluctuations of the image background, which makes localization of the meteor path more difficult. Our approach is based on nonlinear preprocessing of image intensity using the Box-Cox transform, with the logarithmic transform as its particular case. The transformed image sequences are then differentiated along discrete coordinates to obtain a statistical description of the sky background fluctuations, which can be modeled by a multivariate normal distribution. After verification and hypothesis testing, we use the statistical model for outlier detection. While isolated outlier points are ignored, a compact cluster of outliers indicates the presence of a meteoroid after ignition.
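
    A minimal sketch of the pipeline described, on synthetic frames and with simplified assumptions (the clustering of outliers is omitted and the threshold is arbitrary): log-transform the intensities, difference along the frame axis, fit a multivariate normal to per-pixel fluctuation features, and flag outliers by Mahalanobis distance.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    frames = rng.lognormal(mean=2.0, sigma=0.1, size=(50, 32, 32))  # synthetic sky background
    frames[30:34, 10:13, 10:13] *= 3.0                              # synthetic meteor streak

    log_frames = np.log(frames)            # Box-Cox transform in the limit lambda -> 0
    diffs = np.diff(log_frames, axis=0)    # temporal fluctuations of the background

    # Describe each pixel by the mean and std of its fluctuations, fit a 2-D normal
    features = np.stack([diffs.mean(axis=0).ravel(), diffs.std(axis=0).ravel()], axis=1)
    mu = features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", features - mu, cov_inv, features - mu)  # Mahalanobis^2

    threshold = stats.chi2.ppf(0.999, df=2)   # arbitrary cut-off for flagging outliers
    print("pixels flagged as potential meteor track:", np.flatnonzero(d2 > threshold))
    ```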

  10. Distinguishing Man from Molecules: The Distinctiveness of Medical Concepts at Different Levels of Description

    PubMed Central

    Cole, William G.; Michael, Patricia; Blois, Marsden S.

    1987-01-01

    A computer program was created to use information about the statistical distribution of words in journal abstracts to make probabilistic judgments about the level of description (e.g. molecular, cell, organ) of medical text. Statistical analysis of 7,409 journal abstracts taken from three medical journals representing distinct levels of description revealed that many medical words seem to be highly specific to one or another level of description. For example, the word adrenoreceptors occurred only in the American Journal of Physiology, never in the Journal of Biological Chemistry or in the Journal of the American Medical Association. Such highly specific words occurred so frequently that the automatic classification program was able to classify correctly 45 out of 45 test abstracts, with 100% confidence. These findings are interpreted in terms of both a theory of the structure of medical knowledge and the pragmatics of automatic classification.
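
    The classification idea sketched above, deciding a text's level of description from how specific its words are to each level, resembles a bag-of-words probabilistic classifier. The toy example below uses scikit-learn's naive Bayes on invented sentences and labels; it is an illustration of the general approach, not the original 1987 program.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_texts = [
        "adrenoreceptors mediate vascular smooth muscle tone in the organ bath",
        "kinase phosphorylation of the enzyme active site alters substrate binding",
        "patients in the randomized trial reported fewer symptoms after treatment",
    ]
    train_levels = ["organ", "molecular", "clinical"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(train_texts)          # word-count features
    clf = MultinomialNB().fit(X, train_levels)         # per-level word distributions

    test = vectorizer.transform(["the enzyme substrate binding affinity was measured"])
    print(clf.predict(test))   # expected: ['molecular']
    ```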

  11. Research awareness: An important factor for evidence-based practice?

    PubMed

    McSherry, Robert; Artley, Angela; Holloran, Jan

    2006-01-01

    Despite the growing body of literature, the reality of getting evidence into practice remains problematic. The purpose of this study was to establish levels of research awareness amongst registered health care professionals (RHCPs) and the influence of research awareness on evidence-based practice activities. This was a descriptive quantitative study. A convenience sample of 2,126 RHCPs working in a large acute hospital in Northeast England, United Kingdom, was used. A self-completion Research Awareness Questionnaire (RAQ) was directed towards measuring RHCPs' attitudes towards research, understanding of research and the research process, and associations with practising using an evidence base. Data were entered into a Statistical Package for the Social Sciences (SPSS) database and descriptive and inferential statistics were used. A total of 843 questionnaires were returned. Seven hundred and thirty-three (91%) RHCPs overwhelmingly agreed with the principle that evidence-based practice has a large part to play in improving patient care. This point was reinforced by 86% (n = 701) of respondents strongly agreeing or agreeing with the idea that evidence-based practice is the way forward to change clinical practice. Significant associations were noted between levels of confidence to undertake a piece of research and whether the individual had received adequate information about the research process, had basic knowledge and understanding of the research process, or had research awareness education or training. The study shows that RHCPs, regardless of position or grade, have a positive attitude towards research but face many obstacles. The key obstacles are lack of time, support, knowledge, and confidence. To address these obstacles, it is imperative that the organisation adopts a structured and coordinated approach to enable and empower individuals to practise using an evidence base.

  12. 78 FR 34101 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... and basic descriptive statistics on the quantity and type of consumer-reported patient safety events... conduct correlations, cross tabulations of responses and other statistical analysis. Estimated Annual...

  13. A General Procedure to Assess the Internal Structure of a Noncognitive Measure--The Student360 Insight Program (S360) Time Management Scale. Research Report. ETS RR-11-42

    ERIC Educational Resources Information Center

    Ling, Guangming; Rijmen, Frank

    2011-01-01

    The factorial structure of the Time Management (TM) scale of the Student 360: Insight Program (S360) was evaluated based on a national sample. A general procedure with a variety of methods was introduced and implemented, including the computation of descriptive statistics, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).…

  14. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS).

    PubMed

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M; Khan, Ajmal

    2016-01-01

    This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 issues of Pakistan Journal of Medical Sciences (PJMS). Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods was found in the analysis. The most frequent methods included descriptive statistics (n=315, 73.4%), chi-square/Fisher's exact tests (n=205, 47.8%) and Student's t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher's exact test, logistic regression, epidemiological statistics, and non-parametric tests. This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and that their frequency improved from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional design was the most common study design.

  15. 2012 aerospace medical certification statistical handbook.

    DOT National Transportation Integrated Search

    2013-12-01

    The annual Aerospace Medical Certification Statistical Handbook reports descriptive : characteristics of all active U.S. civil aviation airmen and the aviation medical examiners (AMEs) that : perform the required medical examinations. The 2012 annual...

  16. Analysis of Statistical Methods Currently used in Toxicology Journals

    PubMed Central

    Na, Jihye; Yang, Hyeri

    2014-01-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe the statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Sciences and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was predominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and the standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being the most popular (52/93, 56%), yet few studies conducted either a normality or an equal variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012

  17. Analysis of Statistical Methods Currently used in Toxicology Journals.

    PubMed

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe the statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Sciences and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was predominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and the standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being the most popular (52/93, 56%), yet few studies conducted either a normality or an equal variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
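
    As a quick reminder of the dispersion measures tallied above, the sketch below computes the mean, standard deviation, and standard error of the mean for a hypothetical endpoint with n = 6 (the median sample size found); SD describes the spread of the observations, while SEM describes the precision of the mean and shrinks with n.

    ```python
    import numpy as np

    endpoint = np.array([12.1, 9.8, 11.4, 10.6, 13.0, 10.2])   # hypothetical replicate values
    n = endpoint.size
    mean = endpoint.mean()
    sd = endpoint.std(ddof=1)          # sample standard deviation
    sem = sd / np.sqrt(n)              # standard error of the mean
    print(f"mean = {mean:.2f}, SD = {sd:.2f}, SEM = {sem:.2f} (n = {n})")
    ```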

  18. Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy

    NASA Astrophysics Data System (ADS)

    Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.

    2010-03-01

    Distortion of the Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.

  19. Introduction

    USDA-ARS?s Scientific Manuscript database

    The introduction to the second edition of the Compendium of Apple and Pear Diseases contains a general description of genus and species of commercial importance, some general information about growth and fruiting habits as well as recent production statistics. A general description of major scion c...

  20. Ontology-based, Tissue MicroArray oriented, image centered tissue bank

    PubMed Central

    Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano

    2008-01-01

    Background Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results In this work we propose a Tissue MicroArray web oriented system to support researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and results evaluation. Indeed, our system provides ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working on well-defined terms it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a full compliant gene ontology definition, enabling statistical studies about correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177

  1. 2011 aerospace medical certification statistical handbook.

    DOT National Transportation Integrated Search

    2013-01-01

    The annual Aerospace Medical Certification Statistical Handbook reports descriptive characteristics of all active U.S. civil aviation airmen and the aviation medical examiners (AMEs) that perform the required medical examinations. The 2011 annual han...

  2. Large truck crash facts 2005

    DOT National Transportation Integrated Search

    2007-02-01

    This annual edition of Large Truck Crash Facts contains descriptive statistics about fatal, injury, and property damage only crashes involving large trucks in 2005. Selected crash statistics on passenger vehicles are also presented for comparison pur...

  3. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    PubMed

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  4. Antecedents to Organizational Performance: Theoretical and Practical Implications for Aircraft Maintenance Officer Force Development

    DTIC Science & Technology

    2015-03-26

    to my reader, Lieutenant Colonel Robert Overstreet, for helping solidify my research, coaching me through the statistical analysis, and positive... Descriptive Statistics... common-method bias requires careful assessment of potential sources of bias and implementing procedural and statistical control methods. Podsakoff

  5. Using Facebook Data to Turn Introductory Statistics Students into Consultants

    ERIC Educational Resources Information Center

    Childers, Adam F.

    2017-01-01

    Facebook provides businesses and organizations with copious data that describe how users are interacting with their page. This data affords an excellent opportunity to turn introductory statistics students into consultants to analyze the Facebook data using descriptive and inferential statistics. This paper details a semester-long project that…

  6. ALISE Library and Information Science Education Statistical Report, 1999.

    ERIC Educational Resources Information Center

    Daniel, Evelyn H., Ed.; Saye, Jerry D., Ed.

    This volume is the twentieth annual statistical report on library and information science (LIS) education published by the Association for Library and Information Science Education (ALISE). Its purpose is to compile, analyze, interpret, and report statistical (and other descriptive) information about library/information science programs offered by…

  7. Statistical summaries of ground-water level data collected in the Suwannee River Water Management District, 1948 to 1994

    USGS Publications Warehouse

    Collins, J.J.; Freeman, L.D.

    1996-01-01

    Since 1948, ground-water level data have been systematically collected from selected wells in the Suwannee River Water Management District (SRWMD) by the U.S. Geological Survey (USGS), the SRWMD, and other agencies. Records of water levels in the SRWMD (fig. 1), collected by the USGS and SRWMD through 1990, and by the SRWMD from 1990 to 1994, have been published for many years in the USGS annual report series "Water Resources Data for Florida." However, no systematic statistical summaries of water levels in the SRWMD have been previously published. The need for such statistical summary data for evaluations of drought severity, ground-water supply availability, and minimum water levels for regulatory purposes increases daily as demands for ground-water usage increase. Also, much of the base flow of the Suwannee River is dependent upon ground water. As the population and demand for ground water for drinking water and irrigation purposes increase, the ability to quickly and easily predict trends in ground-water availability will become paramount. In response to this need, the USGS, in cooperation with the SRWMD, compiled this report. Ground-water statistics for 136 sites are presented as well as figures showing water levels that were measured in wells from 1948 through September 1994. In 1994, the SRWMD and the USGS began a long-term program of cooperative studies designed to better understand minimum and maximum streamflows and ground-water levels in the SRWMD. Minimum and maximum flows and levels are needed by the district to manage the surface- and ground-water resources of the SRWMD and to maintain or improve the various ecosystems. Data evaluation was a necessary first step in the long-term SRWMD ground-water investigations program, because basic statistics for ground-water levels are not included in the USGS annual data reports such as "Water Resources Data for Florida, Water Year 1994" (Franklin and others, 1995). Statistics included in this report were generated using the USGS computer program ADAPS (Automatic Data Processing System) to characterize normal ground-water levels and departures from normal. The report has been organized so that the statistical analyses of water levels in the wells are presented following this introductory material, a description of the hydrogeology in the study area, and a description of the statistics used to present the water-level data. Specifically, the report presents statistical analyses for each well, as appropriate, in the following manner: a description of the well; hydrographs of ground-water levels for the period of record, for the last 10 years of record, and for the last 5 years of record; graphs of maximum, minimum, and mean of monthly mean ground-water levels for wells with 5 or more years of record; and frequency hydrographs (25, 50, and 75 percent) of monthly mean ground-water levels for wells with 5 or more years of record. Water-level data and statistical plots are grouped by county and sorted within the county by ascending site identification number. Well locations are plotted on county maps preceding the well descriptions and hydrographs.
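
    A minimal sketch, on synthetic daily water levels, of the statistics the report tabulates for each well: monthly mean levels and their 25th, 50th, and 75th percentiles across years, from which the frequency hydrographs can be drawn. Column names and values are invented.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    dates = pd.date_range("1980-01-01", "1994-09-30", freq="D")
    seasonal = 2 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
    levels = pd.Series(10 + seasonal + rng.normal(0, 0.5, len(dates)),
                       index=dates, name="water_level_ft")

    monthly_means = levels.resample("MS").mean()                 # monthly mean water levels
    by_month = monthly_means.groupby(monthly_means.index.month)
    summary = by_month.quantile([0.25, 0.50, 0.75]).unstack()    # 25/50/75 percent values per calendar month
    print(summary.round(2))
    ```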

  8. Statistical Tutorial | Center for Cancer Research

    Cancer.gov

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data.  ST is designed as a follow up to Statistical Analysis of Research Data (SARD) held in April 2018.  The tutorial will apply the general principles of statistical analysis of research data including descriptive statistics, z- and t-tests of means and mean differences, simple and multiple linear regression, ANOVA tests, and Chi-Squared distribution.

  9. A knowledge-based potential with an accurate description of local interactions improves discrimination between native and near-native protein conformations.

    PubMed

    Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco

    2007-01-01

    The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.

  10. 50 CFR Figure 1 to Part 679 - Bering Sea and Aleutian Islands Statistical and Reporting Areas

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Statistical and Reporting Areas 1 Figure 1 to Part 679 Wildlife and Fisheries FISHERY CONSERVATION AND... Islands Statistical and Reporting Areas ER15NO99.000 b. Coordinates Code Description 300 Russian waters... statistical area is the part of a reporting area contained in the EEZ. [64 FR 61983, Nov. 15, 1999; 65 FR...

  11. 50 CFR Figure 1 to Part 679 - Bering Sea and Aleutian Islands Statistical and Reporting Areas

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Statistical and Reporting Areas 1 Figure 1 to Part 679 Wildlife and Fisheries FISHERY CONSERVATION AND... Islands Statistical and Reporting Areas ER15NO99.000 b. Coordinates Code Description 300 Russian waters... statistical area is the part of a reporting area contained in the EEZ. [64 FR 61983, Nov. 15, 1999; 65 FR...

  12. Evaluating statistical cloud schemes: What can we gain from ground-based remote sensing?

    NASA Astrophysics Data System (ADS)

    Grützun, V.; Quaas, J.; Morcrette, C. J.; Ament, F.

    2013-09-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the "perfect model approach." This means that we employ a high-resolution weather model as virtual reality and retrieve full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observation to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy occurring is due to the method. Focusing on total water mixing ratio, we find that the mean ratio can be evaluated decently but that it strongly depends on the meteorological conditions as to whether the variance and skewness are reliable. Using some simple schematic description of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation is necessary to judge whether we can use the data for an evaluation of higher moments of the humidity distribution used by a statistical cloud scheme.
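
    A minimal sketch of the moment-based comparison described: from a sample of total water mixing ratio values (here drawn from an arbitrary synthetic distribution), compute the mean, variance, and skewness that the virtual observations would be checked against.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    q_t = rng.gamma(shape=4.0, scale=1.5e-3, size=10_000)   # synthetic total water mixing ratio (kg/kg)

    print("mean    :", q_t.mean())
    print("variance:", q_t.var(ddof=1))
    print("skewness:", stats.skew(q_t))
    ```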

  13. Evidence-based practice knowledge, attitudes, and practice of online graduate nursing students.

    PubMed

    Rojjanasrirat, Wilaiporn; Rice, Jan

    2017-06-01

    This study aimed to evaluate changes in evidence-based practice (EBP) knowledge, attitudes, and practice of nursing students before and after completing an online, graduate-level, introductory research/EBP course. A prospective one-group pretest-posttest design. A private university in the Midwestern United States. Sixty-three online nurse practitioner students in a Master's program. A convenience sample of online graduate nursing students who enrolled in the research/EBP course was invited to participate in the study. Study outcomes were measured using the Evidence-Based Practice Questionnaire (EBPQ) before and after completing the course. Descriptive statistics and a paired-samples t-test were used to assess the mean differences between pre- and post-test scores. Overall, students' post-test EBP scores were significantly improved over pre-test scores (t(63) = -9.034, p < 0.001). Statistically significant differences were found for practice of EBP mean scores (t(63) = -12.78, p = 0.001). No significant differences were found between pre- and post-tests on knowledge of and attitudes toward EBP scores. The most frequently cited barriers to EBP were lack of understanding of statistics, interpretation of findings, lack of time, and lack of library resources. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. The writer independent online handwriting recognition system frog on hand and cluster generative statistical dynamic time warping.

    PubMed

    Bahlmann, Claus; Burkhardt, Hans

    2004-03-01

    In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device.
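
    The distance at the heart of the approach is a dynamic time warping (DTW) alignment between variable-length sequences. The sketch below is plain textbook DTW on invented 1-D pen trajectories, not the cluster generative statistical variant (CSDTW) the paper develops.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Squared-difference DTW distance between two 1-D sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                # best of: insertion, deletion, match
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    stroke_a = np.array([0.0, 0.2, 0.7, 1.0, 0.8, 0.3])        # hypothetical pen trajectory
    stroke_b = np.array([0.0, 0.1, 0.3, 0.8, 1.0, 0.9, 0.2])   # same character, different timing
    print(dtw_distance(stroke_a, stroke_b))
    ```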

  15. Using Network Analysis to Characterize Biogeographic Data in a Community Archive

    NASA Astrophysics Data System (ADS)

    Wellman, T. P.; Bristol, S.

    2017-12-01

    Informative measures are needed to evaluate and compare data from multiple providers in a community-driven data archive. This study explores insights from network theory and other descriptive and inferential statistics to examine data content and application across an assemblage of publicly available biogeographic data sets. The data are archived in ScienceBase, a collaborative catalog of scientific data supported by the U.S. Geological Survey to enhance scientific inquiry and acuity. In gaining understanding through this investigation and other scientific venues, our goal is to improve scientific insight and data use across a spectrum of scientific applications. Network analysis is a tool to reveal patterns of non-trivial topological features in the data that do not exhibit complete regularity or randomness. In this work, network analyses are used to explore shared events and dependencies between measures of data content and application derived from metadata and catalog information and measures relevant to biogeographic study. Descriptive statistical tools are used to explore relations between network analysis properties, while inferential statistics are used to evaluate the degree of confidence in these assessments. Network analyses have been used successfully in related fields to examine social awareness of scientific issues, taxonomic structures of biological organisms, and ecosystem resilience to environmental change. Use of network analysis also shows promising potential to identify relationships in biogeographic data that inform programmatic goals and scientific interests.

  16. The Performance of Preparatory School Candidates at the United States Naval Academy

    DTIC Science & Technology

    2001-09-01

    1. Differences in Characteristics; 2. Differences in... Coefficients; Table 3.3 Applicant/Midshipman Background Characteristics; Table 3.4 Descriptive Characteristics for Midshipmen by Accession Source; Table 3.5 Descriptive Statistics for

  17. Paradigm shifts and the development of the diagnostic and statistical manual of mental disorders: past experiences and future aspirations.

    PubMed

    First, Michael B

    2010-11-01

    Work is currently under way on the Diagnostic and Statistical Manual of Mental Disorders (DSM), Fifth Edition, due to be published by the American Psychiatric Association in 2013. Dissatisfaction with the current categorical descriptive approach has led to aspirations for a paradigm shift for DSM-5. A historical review of past revisions of the DSM was performed. Efforts undertaken before the start of the DSM-5 development process to conduct a state-of-the science review and set a research agenda were examined to determine if results supported a paradigm shift for DSM-5. Proposals to supplement DSM-5 categorical diagnosis with dimensional assessments are reviewed and critiqued. DSM revisions have alternated between paradigm shifts (the first edition of the DSM in 1952 and DSM-III in 1980) and incremental improvements (DSM-II in 1968, DSM-III-R in 1987, and DSM-IV in 1994). The results of the review of the DSM-5 research planning initiatives suggest that despite the scientific advances that have occurred since the descriptive approach was first introduced in 1980, the field lacks a sufficiently deep understanding of mental disorders to justify abandoning the descriptive approach in favour of a more etiologically based alternative. Proposals to add severity and cross-cutting dimensions throughout DSM-5 are neither paradigm shifting, given that simpler versions of such dimensions are already a component of DSM-IV, nor likely to be used by busy clinicians without evidence that they improve clinical outcomes. Despite initial aspirations that DSM would undergo a paradigm shift with this revision, DSM-5 will continue to adopt a descriptive categorical approach, albeit with a greatly expanded dimensional component.

  18. An R package for analyzing and modeling ranking data

    PubMed Central

    2013-01-01

    Background In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty’s and Koczkodaj’s inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Results Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 was the most preferred item and item 3 the least preferred item, and a significant difference was found between physicians’ preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled as “internal/external”), and the second dimension can be interpreted as their overall variance (labeled as “push/pull factors”). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman’s footrule distance. Conclusions In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying a multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations. PMID:23672645

  19. An R package for analyzing and modeling ranking data.

    PubMed

    Lee, Paul H; Yu, Philip L H

    2013-05-14

    In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 was the most preferred item and item 3 the least preferred item, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled as "internal/external"), and the second dimension can be interpreted as their overall variance (labeled as "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying a multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations.
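
    The pmr package itself is written in R; as a language-neutral illustration of two of the descriptive statistics it reports, the Python sketch below computes the mean rank of each item and the pairwise frequency with which item i is ranked above item j, on a tiny invented set of rankings.

    ```python
    import numpy as np

    # Each row is one respondent's ranking of 4 items; rankings[r, k] is the rank
    # given to item k (1 = most preferred).
    rankings = np.array([
        [1, 3, 2, 4],
        [2, 1, 3, 4],
        [1, 2, 4, 3],
        [1, 4, 2, 3],
    ])

    print("mean rank per item:", rankings.mean(axis=0))

    n_items = rankings.shape[1]
    pairwise = np.zeros((n_items, n_items), dtype=int)
    for i in range(n_items):
        for j in range(n_items):
            if i != j:
                pairwise[i, j] = np.sum(rankings[:, i] < rankings[:, j])
    print("times item i was ranked above item j:\n", pairwise)
    ```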

  20. Preparation of the Professional Athletic Trainer: A Descriptive Study of Undergraduate and Graduate Degree Programs.

    PubMed

    Cavallario, Julie M; Van Lunen, Bonnie L

    2015-07-01

    The examination of the appropriate professional degree for preparation as an athletic trainer is of interest to the profession. Descriptive information concerning universal outcomes is needed to understand the effect of a degree change. To obtain and compare descriptive information related to professional athletic training programs and a potential degree change and to determine if any of these factors contribute to success on existing universal outcome measures. Cross-sectional study. Web-based survey. We contacted 364 program directors; 178 (48.9%; 163 undergraduate, 15 postbaccalaureate) responded. The survey consisted of 46 questions: 45 questions that dealt with 5 themes (institutional demographics [n = 13], program admissions [n = 6], program outcomes [n = 10], program design [n = 9], faculty and staff [n = 7]) and 1 optional question. Descriptive statistics for all programs were calculated. We compared undergraduate and postbaccalaureate programs by examining universal outcome variables. Descriptive statistics demonstrated that 33 programs could not support postbaccalaureate degrees, and a substantial loss of faculty could occur if the degree requirement changed (553 graduate assistants, 642 potentially underqualified instructors). Postbaccalaureate professional programs had higher 2011-2012 first-time Board of Certification (BOC) passing rates (U = 464.5, P = .001), 3-year aggregate first-time BOC passing rates (U = 451.5, P = .001), and employment rates for 2011-2012 graduates employed within athletic training (U = 614.0, P = .01). Linear multiple-regression models demonstrated that program and institution type contributed to the variance of the first-time BOC passing rates and the 3-year aggregate first-time BOC passing rates (P < .05). Students in postbaccalaureate athletic training programs performed better in universal outcome measures. Our data supported the concerns that this transition could result in the loss of some programs and an additional immediate strain on current staff due to potential staffing changes and the loss of graduate assistant positions.

  1. Trends in motor vehicle traffic collision statistics, 1988-1997

    DOT National Transportation Integrated Search

    2001-02-01

    This report presents descriptive statistics about Canadian traffic collisions during the ten-year period : from 1988 to 1997, focusing specifically on casualty collisions. Casualty collisions are defined as all : reportable motor vehicle crashes resu...

  2. 77 FR 10695 - Federal Housing Administration (FHA) Risk Management Initiatives: Revised Seller Concessions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-23

    ... in Tables A and B. Table D--Borrower Closing Costs and Seller Concessions Descriptive Statistics by... accuracy of the statistical data illustrating the correlation between higher seller concessions and an...

  3. 42 CFR 402.7 - Notice of proposed determination.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and a brief description of the statistical sampling technique CMS or OIG used. (3) The reason why the... is relying upon statistical sampling to project the number and types of claims or requests for...

  4. [Dermatoglyphics in the prognostication of constitutional and physical traits in humans].

    PubMed

    Mazur, E S; Sidorenko, A G

    2009-01-01

    The present study was designed to elucidate the relationship between palmar and digital dermatoglyphic patterns and descriptive signs of human appearance, based on the results of comprehensive anthropometric examination of 2620 men and 380 women. A battery of different methods was used to statistically treat the results of the dactyloscopic records. They demonstrated a correlation between skin patterns and external body features that can be used to construct diagnostic models for the purpose of personality identification.

  5. Thermodynamic evolution far from equilibrium

    NASA Astrophysics Data System (ADS)

    Khantuleva, Tatiana A.

    2018-05-01

    The presented model of thermodynamic evolution of an open system far from equilibrium is based on the modern results of nonequilibrium statistical mechanics, the nonlocal theory of nonequilibrium transport developed by the author and the Speed Gradient principle introduced in the theory of adaptive control. Transition to a description of the system internal structure evolution at the mesoscopic level allows a new insight at the stability problem of non-equilibrium processes. The new model is used in a number of specific tasks.

  6. Histogram based analysis of lung perfusion of children after congenital diaphragmatic hernia repair.

    PubMed

    Kassner, Nora; Weis, Meike; Zahn, Katrin; Schaible, Thomas; Schoenberg, Stefan O; Schad, Lothar R; Zöllner, Frank G

    2018-05-01

    To investigate a histogram-based approach to characterize the distribution of perfusion in the whole left and right lung by descriptive statistics and to show how histograms can be used to visually explore perfusion defects in two-year-old children after Congenital Diaphragmatic Hernia (CDH) repair. 28 children (age 24.2 ± 1.7 months; all left-sided hernia; 9 after extracorporeal membrane oxygenation therapy) underwent quantitative DCE-MRI of the lung. Segmentations of the left and right lung were manually drawn to mask the calculated pulmonary blood flow maps and then to derive histograms for each lung side. Individual and group-wise analysis of the histograms of the left and right lung was performed. The ipsilateral and contralateral lung show a significant difference in shape and in descriptive statistics derived from the histogram (Wilcoxon signed-rank test, p < 0.05) at both the group-wise and individual level. Subgroup analysis (patients with vs. without ECMO therapy) showed no significant differences using histogram-derived parameters. Histogram analysis can be a valuable tool to characterize and visualize whole-lung perfusion of children after CDH repair. It allows for several possibilities to analyze the data, both describing the perfusion differences between the right and left lung and exploring and visualizing localized perfusion patterns in the 3D lung volume. Subgroup analysis will be possible given sufficient sample sizes. Copyright © 2017 Elsevier Inc. All rights reserved.
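
    A minimal sketch, on synthetic values, of the two levels of analysis described: a per-patient histogram of one lung's voxel-wise blood-flow values with simple descriptive statistics, and a paired Wilcoxon signed-rank comparison of the two lung sides across patients. All numbers are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_patients = 28
    # Hypothetical per-patient median blood-flow values (ml/100 ml/min) for each lung side
    left_lung = rng.normal(loc=60, scale=15, size=n_patients)    # ipsilateral (hernia) side
    right_lung = rng.normal(loc=85, scale=15, size=n_patients)   # contralateral side

    # Histogram and descriptive statistics for one patient's left-lung voxels
    voxels_left = rng.normal(loc=left_lung[0], scale=10, size=5000)
    counts, edges = np.histogram(voxels_left, bins=50)
    print("median:", np.median(voxels_left), "IQR:", np.percentile(voxels_left, [25, 75]))

    # Paired comparison of the two lung sides across patients
    print(stats.wilcoxon(left_lung, right_lung))
    ```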

  7. Information properties of morphologically complex words modulate brain activity during word reading

    PubMed Central

    Hultén, Annika; Lehtonen, Minna; Lagus, Krista; Salmelin, Riitta

    2018-01-01

    Abstract Neuroimaging studies of the reading process point to functionally distinct stages in word recognition. Yet, current understanding of the operations linked to those various stages is mainly descriptive in nature. Approaches developed in the field of computational linguistics may offer a more quantitative approach for understanding brain dynamics. Our aim was to evaluate whether a statistical model of morphology, with well‐defined computational principles, can capture the neural dynamics of reading, using the concept of surprisal from information theory as the common measure. The Morfessor model, created for unsupervised discovery of morphemes, is based on the minimum description length principle and attempts to find optimal units of representation for complex words. In a word recognition task, we correlated brain responses to word surprisal values derived from Morfessor and from other psycholinguistic variables that have been linked with various levels of linguistic abstraction. The magnetoencephalography data analysis focused on spatially, temporally and functionally distinct components of cortical activation observed in reading tasks. The early occipital and occipito‐temporal responses were correlated with parameters relating to visual complexity and orthographic properties, whereas the later bilateral superior temporal activation was correlated with whole‐word based and morphological models. The results show that the word processing costs estimated by the statistical Morfessor model are relevant for brain dynamics of reading during late processing stages. PMID:29524274
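    To make the surprisal measure referred to above concrete, the short sketch below computes surprisal as the negative log probability of a word (here in bits). The probabilities are hypothetical stand-ins for values that a model such as Morfessor or another psycholinguistic predictor would supply; they are not data from the study.

    # Surprisal as negative log probability: rarer words carry a higher processing cost.
    # The word probabilities below are invented for illustration only.
    import math

    def surprisal(prob: float) -> float:
        """Surprisal in bits for an event with probability `prob`."""
        return -math.log2(prob)

    word_probs = {"cat": 0.01, "cats": 0.004, "catlike": 0.0002}  # hypothetical values
    for word, p in word_probs.items():
        print(f"{word}: {surprisal(p):.2f} bits")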

  8. Identifying weaknesses in undergraduate programs within the context input process product model framework in view of faculty and library staff in 2014.

    PubMed

    Neyazi, Narges; Arab, Mohammad; Farzianpour, Freshteh; Mahmoudi, Mahmood

    2016-06-01

    The objective of this research is to identify weaknesses of undergraduate programs in terms of personnel, finance, organizational management and facilities from the viewpoint of faculty and library staff, and to determine factors that may facilitate program quality improvement. This is a descriptive-analytical survey and, in terms of purpose, an applied evaluation study in which undergraduate programs of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context-input-process-product model in 2014. The statistical population consisted of three subgroups, including department heads (n=10), faculty members (n=61), and library staff (n=10), for a total of 81 people. Data were collected through three researcher-made questionnaires based on a Likert scale and were then analyzed using descriptive and inferential statistics. Results showed a desirable or relatively desirable situation for factors in the context, input, process, and product fields, except for administration and finance and for research and educational spaces and equipment, which were in an undesirable situation. Based on the results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, the educational curriculum, and administration and finance, and recommended steps regarding finance, organizational management and communication with graduates in order to improve the quality of this system.

  9. Information properties of morphologically complex words modulate brain activity during word reading.

    PubMed

    Hakala, Tero; Hultén, Annika; Lehtonen, Minna; Lagus, Krista; Salmelin, Riitta

    2018-06-01

    Neuroimaging studies of the reading process point to functionally distinct stages in word recognition. Yet, current understanding of the operations linked to those various stages is mainly descriptive in nature. Approaches developed in the field of computational linguistics may offer a more quantitative approach for understanding brain dynamics. Our aim was to evaluate whether a statistical model of morphology, with well-defined computational principles, can capture the neural dynamics of reading, using the concept of surprisal from information theory as the common measure. The Morfessor model, created for unsupervised discovery of morphemes, is based on the minimum description length principle and attempts to find optimal units of representation for complex words. In a word recognition task, we correlated brain responses to word surprisal values derived from Morfessor and from other psycholinguistic variables that have been linked with various levels of linguistic abstraction. The magnetoencephalography data analysis focused on spatially, temporally and functionally distinct components of cortical activation observed in reading tasks. The early occipital and occipito-temporal responses were correlated with parameters relating to visual complexity and orthographic properties, whereas the later bilateral superior temporal activation was correlated with whole-word based and morphological models. The results show that the word processing costs estimated by the statistical Morfessor model are relevant for brain dynamics of reading during late processing stages. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  10. Characterizing upper urinary tract dilation on ultrasound: a survey of North American pediatric radiologists' practices.

    PubMed

    Swenson, David W; Darge, Kassa; Ziniel, Sonja I; Chow, Jeanne S

    2015-04-01

    Radiologists commonly evaluate children first diagnosed with urinary tract dilation on prenatal ultrasound (US). To establish how North American pediatric radiologists define and report findings of urinary tract dilation on US. A web-based survey was sent to North American members of the Society for Pediatric Radiology (SPR) from January to February 2014. Reporting practices and interpretation of three image-based cases using free text were queried. Responses to close-ended questions were analyzed with descriptive statistics, while free-text responses to the three cases were categorized and analyzed as (1) using either descriptive terminology or an established numerical grading system and (2) as providing a quantitative term for the degree of dilation. Two hundred eighty-four pediatric radiologists answered the survey resulting in a response rate of 19.0%. There is a great variety in the terms used to describe urinary tract dilation with 66.2% using descriptive terminology, 35.6% using Society for Fetal Urology (SFU) grading system and 35.9% measuring anterior-posterior diameter (APD) of the renal pelvis. There is no consensus for a normal postnatal APD or the meaning of hydronephrosis. For the same images, descriptions vary widely in degree of severity ranging from normal to mild to severe. Similar variability exists among those using the SFU system. Ninety-seven percent say they believe a unified descriptive system would be helpful and 87.7% would use it if available. Pediatric radiologists do not have a standardized method for describing urinary tract dilation but have a great desire for such a system and would follow it if available.

  11. Organizational Commitment DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

    commitment construct that targets more specifically the workgroup frame of reference. Included is a review of the 4.0 description and items...followed by the proposed modifications to the factor. The DEOCS 4.0 description provided for organizational commitment is “members’ dedication to the... (5) examining variance and descriptive statistics, and (6) selecting items that demonstrate the strongest scale properties.

  12. Precision Cosmology: The First Half Million Years

    NASA Astrophysics Data System (ADS)

    Jones, Bernard J. T.

    2017-06-01

    Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.

  13. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that, when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball formation. In a few simple cases the probability of a specific final state can be calculated analytically, where we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final state probabilities. Additionally, we examined the formation of strange and charmed mesons as well, where we used existing data to fit the relevant model parameters.

  14. Elementary test for nonclassicality based on measurements of position and momentum

    NASA Astrophysics Data System (ADS)

    Fresta, Luca; Borregaard, Johannes; Sørensen, Anders S.

    2015-12-01

    We generalize a nonclassicality test described by Kot et al. [Phys. Rev. Lett. 108, 233601 (2012), 10.1103/PhysRevLett.108.233601], which can be used to rule out any classical description of a physical system. The test is based on measurements of quadrature operators and works by proving a contradiction with the classical description in terms of a probability distribution in phase space. As opposed to the previous work, we generalize the test to include states without rotational symmetry in phase space. Furthermore, we compare the performance of the nonclassicality test with classical tomography methods based on the inverse Radon transform, which can also be used to establish the quantum nature of a physical system. In particular, we consider a nonclassicality test based on the so-called filtered back-projection formula. We show that the general nonclassicality test is conceptually simpler, requires less assumptions on the system, and is statistically more reliable than the tests based on the filtered back-projection formula. As a specific example, we derive the optimal test for quadrature squeezed single-photon states and show that the efficiency of the test does not change with the degree of squeezing.

  15. The cost of an emergency department visit and its relationship to emergency department volume.

    PubMed

    Bamezai, Anil; Melnick, Glenn; Nawathe, Amar

    2005-05-01

    This article addresses 2 questions: (1) to what extent do emergency departments (EDs) exhibit economies of scale; and (2) to what extent do publicly available accounting data understate the marginal cost of an outpatient ED visit? Understanding the appropriate role for EDs in the overall health care system is crucially dependent on answers to these questions. The literature on these issues is sparse and somewhat dated and fails to differentiate between trauma and nontrauma hospitals. We believe a careful review of these questions is necessary because several changes (greater managed care penetration, increased price competition, cost of compliance with Emergency Medical Treatment and Active Labor Act regulations, and so on) may have significantly altered ED economics in recent years. We use a 2-pronged approach, one based on descriptive analyses of publicly available accounting data and one based on statistical cost models estimated from a 9-year panel of hospital data, to address the above-mentioned questions. Neither the descriptive analyses nor the statistical models support the existence of significant scale economies. Furthermore, the marginal cost of outpatient ED visits, even without the emergency physician component, appears quite high: in 1998 dollars, US $295 and US $412 for nontrauma and trauma EDs, respectively. These statistical estimates exceed the accounting estimates of per-visit costs by a factor of roughly 2. Our findings suggest that the marginal cost of an outpatient ED visit is higher than is generally believed. Hospitals thus need to carefully review how EDs fit within their overall operations and cost structure and may need to pay special attention to policies and procedures that guide the delivery of nonurgent care through the ED.

  16. [Descriptive analysis of work and trends in anaesthesiology from 2005 to 2006: quantitative and qualitative aspects of effects and evaluation of anaesthesia].

    PubMed

    Majstorović, Branislava M; Simić, Snezana; Milaković, Branko D; Vucović, Dragan S; Aleksić, Valentina V

    2010-01-01

    In anaesthesiology, economic aspects have been insufficiently studied. The aim of this paper was to assess the rational choice of anaesthesiological services based on an analysis of their scope, distribution, trend and cost. The costs of anaesthesiological services were calculated based on "unit" prices from the Republic Health Insurance Fund. Data were analysed by methods of descriptive statistics, and statistical significance was tested by Student's t-test and the chi-square test. The number of general anaesthesias was higher and the average duration of general anaesthesia was shorter during 2006 compared to the previous year, without statistical significance (t-test, p = 0.436). The use of local anaesthesia was significantly higher (chi-square test, p = 0.001) in emergency surgery relative to planned operations. The analysis of total anaesthesiological procedures revealed that the number of procedures significantly increased in ENT and MFH surgery and ophthalmology, while some reduction was observed in general surgery, orthopaedics and trauma surgery, and cardiovascular surgery (chi-square test, p = 0.000). The number of analgesias was higher than that of other procedures (chi-square test, p = 0.000). The structure of the cost was 24% in neurosurgery, 16% in digestive (general) surgery, 14% in gynaecology and obstetrics, 13% in cardiovascular surgery and 9% in the emergency room. Anaesthesiological service costs were the highest in neurosurgery, due to the length of anaesthesia, and in digestive surgery, due to the total number of general anaesthesias performed. It is important to implement pharmacoeconomic studies in all departments, and to separate the anaesthesia services for emergency and planned operations. Disproportions between the number of anaesthesias, surgical interventions and the number of patients in surgical departments give reason to design a relational database.

  17. Data to inform a social media component for professional development and practices: A design-based research study.

    PubMed

    Novakovich, Jeanette; Shaw, Steven; Miah, Sophia

    2017-02-01

    This DIB article includes the course artefacts, instruments, survey data, and descriptive statistics, along with in-depth correlational analysis for the first iteration of a design-based research study on designing curriculum for developing online professional identity and social media practices for a multi-major advanced professional writing course. Raw data was entered into SPSS software. For interpretation and discussion, please see the original article entitled, "Designing curriculum to shape professional social media skills and identity in virtual communities of practice" (J. Novakovich, S. Miah, S. Shaw, 2017) [1].

  18. Long-term strategy for the statistical design of a forest health monitoring system

    Treesearch

    Hans T. Schreuder; Raymond L. Czaplewski

    1993-01-01

    A conceptual framework is given for a broad-scale survey of forest health that accomplishes three objectives: generate descriptive statistics; detect changes in such statistics; and simplify analytical inferences that identify, and possibly establish cause-effect relationships. Our paper discusses the development of sampling schemes to satisfy these three objectives,...

  19. Computers as an Instrument for Data Analysis. Technical Report No. 11.

    ERIC Educational Resources Information Center

    Muller, Mervin E.

    A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…

  20. A Descriptive Study of Individual and Cross-Cultural Differences in Statistics Anxiety

    ERIC Educational Resources Information Center

    Baloglu, Mustafa; Deniz, M. Engin; Kesici, Sahin

    2011-01-01

    The present study investigated individual and cross-cultural differences in statistics anxiety among 223 Turkish and 237 American college students. A 2 x 2 between-subjects factorial multivariate analysis of covariance (MANCOVA) was performed on the six dependent variables which are the six subscales of the Statistical Anxiety Rating Scale.…

  1. Children in the UK: Signposts to Statistics.

    ERIC Educational Resources Information Center

    Grey, Eleanor

    This guide indicates statistical sources in the United Kingdom dealing with children and young people. Regular and occasional sources are listed in a three-column format including the name of the source, a brief description, and the geographic area to which statistics refer. Information is classified under 25 topic headings: abortions; accidents;…

  2. An analysis of the relationship of flight hours and naval rotary wing aviation mishaps

    DTIC Science & Technology

    2017-03-01

    evidence to support indicators used for sequestration, high flight hours, night flight, and overwater flight had statistically significant effects on...

  3. Practicing Statistics by Creating Exercises for Fellow Students

    ERIC Educational Resources Information Center

    Bebermeier, Sarah; Reiss, Katharina

    2016-01-01

    This article outlines the execution of a workshop in which students were encouraged to actively review the course contents on descriptive statistics by creating exercises for their fellow students. In a first-year statistics course in psychology, 39 out of 155 students participated in the workshop. In a subsequent evaluation, the workshop was…

  4. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS)

    PubMed Central

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M.; Khan, Ajmal

    2016-01-01

    Objective: This article compares the study design and statistical methods used in 2005, 2010 and 2015 in the Pakistan Journal of Medical Sciences (PJMS). Methods: Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. Results: A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods was found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher’s exact tests (n=205, 47.8%) and the Student t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher’s exact test, logistic regression, epidemiological statistics, and non-parametric tests. Conclusion: This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and that their frequency increased from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional design was the most common study design. PMID:27022365

  5. Learning moment-based fast local binary descriptor

    NASA Astrophysics Data System (ADS)

    Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet

    2017-03-01

    Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness against affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and the effectiveness of the proposed descriptor, as well as its good performance in terms of low computation complexity when compared with state-of-the-art real-time local descriptors.
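    The sketch below only illustrates the general idea of replacing raw intensity comparisons with binary tests on patch moments; it is a simplified toy version, not the POLAR_MOBIL descriptor itself, and the region layout, moment choice and learning-based test selection are all invented for the example.

    # Toy moment-based binary descriptor: compare low-order moments of two sub-regions
    # of a patch instead of raw pixel intensities. Illustration only.
    import numpy as np

    def region_moments(region):
        """Mean and variance of a sub-region (simple statistical moments)."""
        return np.array([region.mean(), region.var()])

    def binary_test(patch, box_a, box_b):
        """Return 1 if the mean of region A exceeds the mean of region B, else 0."""
        ya, xa, ha, wa = box_a
        yb, xb, hb, wb = box_b
        m_a = region_moments(patch[ya:ya + ha, xa:xa + wa])
        m_b = region_moments(patch[yb:yb + hb, xb:xb + wb])
        return int(m_a[0] > m_b[0])

    def describe_patch(patch, test_pairs):
        """Concatenate binary tests into a compact binary descriptor vector."""
        return np.array([binary_test(patch, a, b) for a, b in test_pairs], dtype=np.uint8)

    rng = np.random.default_rng(0)
    patch = rng.random((32, 32))
    pairs = [((0, 0, 8, 8), (8, 8, 8, 8)), ((16, 0, 8, 8), (0, 16, 8, 8))]  # example test pairs
    print(describe_patch(patch, pairs))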

  6. Algorithm for computing descriptive statistics for very large data sets and the exa-scale era

    NASA Astrophysics Data System (ADS)

    Beekman, Izaak

    2017-11-01

    An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor the statistical convergence and estimate the error/residual in the quantity, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation, with provably favorable properties. The mean, variance, covariances and arbitrary higher order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
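    A minimal sketch of the kind of single-pass, online update the record describes is given below (a Welford/Pébay-style recurrence organised around a shrinking delta). It is an illustrative re-implementation for the mean and variance only, not the SPOCS code, and the convergence monitor shown is simply the standard error of the running mean.

    # Single-pass (online) mean and variance with a simple convergence monitor.
    class OnlineStats:
        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0  # running sum of squared deviations from the mean

        def update(self, x: float):
            self.n += 1
            delta = x - self.mean          # the "delta" shrinks as the statistic converges
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        @property
        def variance(self):
            return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

        @property
        def stderr_of_mean(self):
            """Rough convergence/error estimate: standard error of the running mean."""
            return (self.variance / self.n) ** 0.5 if self.n > 1 else float("inf")

    stats = OnlineStats()
    for sample in (2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0):
        stats.update(sample)
    print(stats.mean, stats.variance, stats.stderr_of_mean)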

  7. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  8. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  9. Large truck and bus crash facts, 2010.

    DOT National Transportation Integrated Search

    2012-09-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and : property damage only crashes involving large trucks and buses in 2010. Selected crash statistics on passenger : vehicles are also presen...

  10. Large truck and bus crash facts, 2007.

    DOT National Transportation Integrated Search

    2009-03-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and : property damage only crashes involving large trucks and buses in 2007. Selected crash statistics on passenger : vehicles are also presen...

  11. Large truck and bus crash facts, 2008. 

    DOT National Transportation Integrated Search

    2010-03-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and : property damage only crashes involving large trucks and buses in 2008. Selected crash statistics on passenger : vehicles are also presen...

  12. Large truck and bus crash facts, 2011.

    DOT National Transportation Integrated Search

    2013-10-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and : property damage only crashes involving large trucks and buses in 2011. Selected crash statistics on passenger : vehicles are also presen...

  13. Large truck and bus crash facts, 2013.

    DOT National Transportation Integrated Search

    2015-04-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and property damage only crashes involving large trucks and buses in 2013. Selected crash statistics on passenger vehicles are also presented ...

  14. Large truck and bus crash facts, 2009.

    DOT National Transportation Integrated Search

    2011-10-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and : property damage only crashes involving large trucks and buses in 2009. Selected crash statistics on passenger : vehicles are also presen...

  15. Large truck and bus crash facts, 2012.

    DOT National Transportation Integrated Search

    2014-06-01

    This annual edition of Large Truck and Bus Crash Facts contains descriptive statistics about fatal, injury, and property damage only crashes involving large trucks and buses in 2012. Selected crash statistics on passenger vehicles are also presented ...

  16. Realistic finite temperature simulations of magnetic systems using quantum statistics

    NASA Astrophysics Data System (ADS)

    Bergqvist, Lars; Bergman, Anders

    2018-01-01

    We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to classical (Boltzmann) statistics normally used in this kind of simulation, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe3.
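    The following sketch contrasts the classical and Bose-Einstein occupation of a single magnon mode, which is the essential low-temperature correction described in this record; the magnon energy and the temperatures are arbitrary example values, not parameters from the paper.

    # Classical vs Bose-Einstein occupation of one magnon mode (illustrative values only).
    import numpy as np

    k_B = 8.617333e-5  # Boltzmann constant in eV/K

    def n_classical(E, T):
        """Classical (equipartition-like) occupation, k_B*T / E."""
        return k_B * T / E

    def n_bose_einstein(E, T):
        """Bose-Einstein occupation 1 / (exp(E / k_B T) - 1)."""
        return 1.0 / np.expm1(E / (k_B * T))

    E_magnon = 0.01  # eV, example magnon energy (assumption)
    for T in (10, 100, 300, 1000):
        print(f"T = {T:4d} K  classical = {n_classical(E_magnon, T):9.4f}  "
              f"Bose-Einstein = {n_bose_einstein(E_magnon, T):9.4f}")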

  17. [Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].

    PubMed

    Golder, W

    1999-09-01

    To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basal and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basal, and 0.5% advanced statistics. The corresponding figures in 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and in articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors increasingly make use of statistical experts' opinions and statistical programs. During the last decade, the use of statistical methods in German radiological journals has fundamentally improved, both quantitatively and qualitatively. Presently, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.

  18. Global Core Plasma Model

    NASA Technical Reports Server (NTRS)

    Gallagher, Dennis L.; Craven, Paul D.; Comfort, Richard H.

    1999-01-01

    Over 40 years of ground and spacecraft plasmaspheric measurements have resulted in many statistical descriptions of plasmaspheric properties. In some cases, these properties have been represented as analytical descriptions that are valid for specific regions or conditions. For the most part, what has not been done is to extend regional empirical descriptions or models to the plasmasphere as a whole. In contrast, many related investigations depend on the use of representative plasmaspheric conditions throughout the inner magnetosphere. Wave propagation, involving the transport of energy through the magnetosphere, is strongly affected by thermal plasma density and its composition. Ring current collisional and wave particle losses also strongly depend on these quantities. The plasmasphere also plays a secondary role in influencing radio signals from the Global Positioning System satellites. The Global Core Plasma Model (GCPM) is an attempt to assimilate previous empirical evidence and regional models for plasmaspheric density into a continuous, smooth model of thermal plasma density in the inner magnetosphere. In that spirit, the International Reference Ionosphere is currently used to complete the low altitude description of density and composition in the model. The models and measurements on which the GCPM is currently based and its relationship to IRI will be discussed.

  19. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...

  20. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...

  1. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...

  2. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...

  3. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...

  4. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    PubMed

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical method. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcome. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.

  5. Introduction of statistical information in a syntactic analyzer for document image recognition

    NASA Astrophysics Data System (ADS)

    Maroneze, André O.; Coüasnon, Bertrand; Lemaitre, Aurélie

    2011-01-01

    This paper presents an improvement to document layout analysis systems, offering a possible solution to Sayre's paradox (which states that an element "must be recognized before it can be segmented; and it must be segmented before it can be recognized"). This improvement, based on stochastic parsing, allows integration of statistical information, obtained from recognizers, during syntactic layout analysis. We present how this fusion of numeric and symbolic information in a feedback loop can be applied to syntactic methods to improve document description expressiveness. To limit combinatorial explosion during exploration of solutions, we devised an operator that allows optional activation of the stochastic parsing mechanism. Our evaluation on 1250 handwritten business letters shows this method allows the improvement of global recognition scores.

  6. Rogue waves in terms of multi-point statistics and nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, Ali; Lind, Pedro; Mori, Nobuhito; Hoffmann, Norbert P.; Peinke, Joachim

    2017-04-01

    Ocean waves, which lead to rogue waves, are investigated against the background of complex systems. In contrast to deterministic approaches based on the nonlinear Schroedinger equation or focusing effects, we analyze this system in terms of a noisy stochastic system. In particular we present a statistical method that maps the complexity of multi-point data into the statistics of hierarchically ordered height increments for different time scales. We show that the stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities, as well as the Fokker-Planck equation itself, can be estimated directly from the available observational data. This stochastic description enables us to show several new aspects of wave states. Surrogate data sets can in turn be generated, allowing us to work out different statistical features of the complex sea state in general and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics. As a new outlook the ocean wave states will be considered in terms of nonequilibrium thermodynamics, for which the entropy production of different wave heights will be examined. We show evidence that rogue waves are characterized by negative entropy production. The statistics of the entropy production can be used to distinguish different wave states.
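    As an illustration of estimating a Fokker-Planck description directly from data, the sketch below computes drift and diffusion coefficients from conditional increments (Kramers-Moyal coefficients) of a time series. The binning, sample-count threshold and the synthetic Ornstein-Uhlenbeck test signal are assumptions for demonstration, not the authors' wave-data pipeline.

    # Estimate drift D1(x) and diffusion D2(x) from conditional increments of a time series.
    import numpy as np

    def kramers_moyal(x, dt, n_bins=30, min_samples=10):
        """First two Kramers-Moyal coefficients on a grid of states."""
        increments = np.diff(x)
        states = x[:-1]
        edges = np.linspace(states.min(), states.max(), n_bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        d1 = np.full(n_bins, np.nan)
        d2 = np.full(n_bins, np.nan)
        for i in range(n_bins):
            in_bin = (states >= edges[i]) & (states < edges[i + 1])
            if in_bin.sum() >= min_samples:           # require enough samples per bin
                d1[i] = increments[in_bin].mean() / dt
                d2[i] = (increments[in_bin] ** 2).mean() / (2.0 * dt)
        return centers, d1, d2

    # Synthetic Ornstein-Uhlenbeck series standing in for measured wave-height data.
    rng = np.random.default_rng(0)
    dt, n = 0.01, 100_000
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] - x[t - 1] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
    centers, d1, d2 = kramers_moyal(x, dt)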

  7. Traditional Practices of Mothers in the Postpartum Period: Evidence from Turkey.

    PubMed

    Altuntuğ, Kamile; Anık, Yeşim; Ege, Emel

    2018-03-01

    In various cultures, the postpartum period is a sensitive time and various traditional practices are applied to protect the health of the mother and the baby. The aim of this study was to determine traditional practices of mother care in the postpartum period in Konya City, Turkey. The research was a descriptive, cross-sectional study carried out among 291 women in the first 8 weeks of the postpartum period who visited family health centers from June 1 to December 1, 2015. The data were collected using questionnaires. Statistical analysis of the data was done with SPSS version 22.0. Descriptive statistics were used to analyze the data. Based on the results, 84.5% of women applied a traditional mother care practice during the postpartum period. The most popular were practices for increasing breast milk (97.9%), preventing incubus "albasması" (81.8%), getting rid of incubus (74.9%), and preventing postpartum bleeding (14.1%). The findings of the study show that traditional practices of mother care in the period after birth are common. In order to provide better health services, it is important for health professionals to understand the traditional beliefs and practices of the individuals, families, and society that they serve.

  8. Gestational surrogacy: Viewpoint of Iranian infertile women.

    PubMed

    Rahmani, Azad; Sattarzadeh, Nilofar; Gholizadeh, Leila; Sheikhalipour, Zahra; Allahbakhshian, Atefeh; Hassankhani, Hadi

    2011-09-01

    Surrogacy is a popular form of assisted reproductive technology, of which only the gestational form is approved by most of the religious scholars in Iran. Little evidence exists about Iranian infertile women's viewpoints regarding gestational surrogacy. To assess the viewpoint of Iranian infertile women toward gestational surrogacy. This descriptive study was conducted at the infertility clinic of Tabriz University of Medical Sciences, Iran. The study sample consisted of 238 infertile women who were selected using the eligible sampling method. Data were collected using a researcher-developed questionnaire that included 25 items based on a five-point Likert scale. Data analysis was conducted with SPSS statistical software using descriptive statistics. The viewpoint of 214 women (89.9%) was positive. Thirty-six women (15.1%) considered gestational surrogacy against their religious beliefs; 170 women (71.4%) did not regard the commissioning couple as owners of the baby; 160 women (67.2%) said that children born through surrogacy would better not know about it; and 174 women (73.1%) believed that children born through surrogacy will face mental problems. Iranian infertile women have a positive viewpoint regarding surrogacy. However, to increase the acceptability of surrogacy among infertile women, further efforts are needed.

  9. A review of published analyses of case-cohort studies and recommendations for future reporting.

    PubMed

    Sharp, Stephen J; Poulaliou, Manon; Thompson, Simon G; White, Ian R; Wood, Angela M

    2014-01-01

    The case-cohort study design combines the advantages of a cohort study with the efficiency of a nested case-control study. However, unlike more standard observational study designs, there are currently no guidelines for reporting results from case-cohort studies. Our aim was to review recent practice in reporting these studies, and develop recommendations for the future. By searching papers published in 24 major medical and epidemiological journals between January 2010 and March 2013 using PubMed, Scopus and Web of Knowledge, we identified 32 papers reporting case-cohort studies. The median subcohort sampling fraction was 4.1% (interquartile range 3.7% to 9.1%). The papers varied in their approaches to describing the numbers of individuals in the original cohort and the subcohort, presenting descriptive data, and in the level of detail provided about the statistical methods used, so it was not always possible to be sure that appropriate analyses had been conducted. Based on the findings of our review, we make recommendations about reporting of the study design, subcohort definition, numbers of participants, descriptive information and statistical methods, which could be used alongside existing STROBE guidelines for reporting observational studies.

  10. Identification of phases, symmetries and defects through local crystallography

    DOE PAGES

    Belianinov, Alex; He, Qian; Kravchenko, Mikhail; ...

    2015-07-20

    Here we report that advances in electron and probe microscopies allow 10 pm or higher precision in measurements of atomic positions. This level of fidelity is sufficient to correlate the length (and hence energy) of bonds, as well as bond angles to functional properties of materials. Traditionally, this relied on mapping locally measured parameters to macroscopic variables, for example, average unit cell. This description effectively ignores the information contained in the microscopic degrees of freedom available in a high-resolution image. Here we introduce an approach for local analysis of material structure based on statistical analysis of individual atomic neighbourhoods. Clustering and multivariate algorithms such as principal component analysis explore the connectivity of lattice and bond structure, as well as identify minute structural distortions, thus allowing for chemical description and identification of phases. This analysis lays the framework for building image genomes and structure–property libraries, based on conjoining structural and spectral realms through local atomic behaviour.
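    A toy sketch of the local-analysis idea follows: each atom is described by a feature vector built from its nearest-neighbour distances, and principal component analysis plus clustering are used to separate structural classes. The feature construction, library calls and random coordinates are illustrative assumptions, not the authors' workflow.

    # Local crystallography, toy version: PCA + clustering on nearest-neighbour distance vectors.
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def neighbourhood_features(positions, k=6):
        """For each atom, the sorted distances to its k nearest neighbours."""
        tree = cKDTree(positions)
        dists, _ = tree.query(positions, k=k + 1)   # first neighbour is the atom itself
        return dists[:, 1:]

    # `positions` would be an (N, 2) array of atomic coordinates extracted from an image;
    # random coordinates are used here only as a placeholder.
    positions = np.random.default_rng(1).random((500, 2))
    features = neighbourhood_features(positions)
    scores = PCA(n_components=2).fit_transform(features)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores)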

  11. An Analysis of Research Methods and Statistical Techniques Used by Doctoral Dissertation at the Education Sciences in Turkey

    ERIC Educational Resources Information Center

    Karadag, Engin

    2010-01-01

    To assess research methods and the analysis of statistical techniques employed by educational researchers, this study surveyed unpublished doctoral dissertations from 2003 to 2007. Frequently used research methods included experimental research, surveys, correlational studies, and case studies. Descriptive statistics, t-test, ANOVA, factor…

  12. Risk Factors for Sexual Violence in the Military: An Analysis of Sexual Assault and Sexual Harassment Incidents and Reporting

    DTIC Science & Technology

    2017-03-01

    LIST OF TABLES (excerpt): Table 1. Descriptive Statistics for Control Variables by... Descriptive Statistics for Control Variables by Gender (Random Subsample with Complete Survey). ...empirical analysis. Chapter IV describes the summary statistics and results. Finally, Chapter V offers concluding thoughts, study limitations, and...

  13. What We Know about Community College Low-Income and Minority Student Outcomes: Descriptive Statistics from National Surveys

    ERIC Educational Resources Information Center

    Bailey, Thomas; Jenkins, Davis; Leinbach, Timothy

    2005-01-01

    This report summarizes the latest available national statistics on access and attainment by low income and minority community college students. The data come from the National Center for Education Statistics' (NCES) Integrated Postsecondary Education Data System (IPEDS) annual surveys of all postsecondary educational institutions and the NCES…

  14. A First Assignment to Create Student Buy-In in an Introductory Business Statistics Course

    ERIC Educational Resources Information Center

    Newfeld, Daria

    2016-01-01

    This paper presents a sample assignment to be administered after the first two weeks of an introductory business focused statistics course in order to promote student buy-in. This assignment integrates graphical displays of data, descriptive statistics and cross-tabulation analysis through the lens of a marketing analysis study. A marketing sample…

  15. Regional analyses of labor markets and demography: a model based Norwegian example.

    PubMed

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  16. Modeling of Pedestrian Flows Using Hybrid Models of Euler Equations and Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Bärwolff, Günter; Slawig, Thomas; Schwandt, Hartmut

    2007-09-01

    In recent years, various systems have been developed for controlling, planning and predicting the traffic of persons and vehicles, in particular under security aspects. Going beyond pure counting and statistical models, approaches based on well-known concepts originally developed in very different research areas, namely continuum mechanics and computer science, were found to be very adequate and accurate. In the present paper, we outline a continuum mechanical approach for the description of pedestrian flow.

  17. Identifying and Validating Requirements of a Mobile-Based Self-Management System for People Living with HIV.

    PubMed

    Mehraeen, Esmaeil; Safdari, Reza; Seyedalinaghi, Seyed Ahmad; Mohammadzadeh, Niloofar; Arji, Goli

    2018-01-01

    Due to the widespread use and low cost of mobile technology, implementing a mobile-based self-management system can promote adherence to medication regimens and the health of people living with HIV (PLWH). We aimed to identify requirements of a mobile-based self-management system and validate them from the perspective of infectious diseases specialists. This is a mixed-methods study that was carried out in two main phases. In the first phase, we identified requirements of a mobile-based self-management system for PLWH. In the second phase, the identified requirements were validated using a researcher-made questionnaire. The statistical population consisted of infectious diseases specialists affiliated with Tehran University of Medical Sciences. The collected data were analyzed using SPSS statistical software (version 19) and descriptive statistics. Through a full-text review of selected studies, we determined requirements of a mobile-based self-management system in four categories: demographic, clinical, strategic (self-management) and technical capabilities. According to the findings, 6 data elements for the demographic category, 11 data elements for the clinical category, 10 items for self-management strategies, and 11 features for technical capabilities were selected. Using the identified preferences, it is possible to design and implement a mobile-based self-management system for HIV-positive people. Developing a mobile-based self-management system is expected to improve the self-management skills of PLWH, improve medication regimen adherence, and facilitate communication with healthcare providers.

  18. Methodological quality and descriptive characteristics of prosthodontic-related systematic reviews.

    PubMed

    Aziz, T; Compton, S; Nassar, U; Matthews, D; Ansari, K; Flores-Mir, C

    2013-04-01

    Ideally, healthcare systematic reviews (SRs) should be beneficial to practicing professionals in making evidence-based clinical decisions. However, the conclusions drawn from SRs are directly related to the quality of the SR and of the included studies. The aim was to investigate the methodological quality and key descriptive characteristics of SRs published in prosthodontics. Methodological quality was analysed using the Assessment of Multiple Reviews (AMSTAR) tool. Several electronic resources (MEDLINE, EMBASE, Web of Science and American Dental Association's Evidence-based Dentistry website) were searched. In total 106 SRs were located. Key descriptive characteristics and methodological quality features were gathered and assessed, and descriptive and inferential statistical testing was performed. Most SRs in this sample originated from the European continent, followed by North America. Two to five authors conducted most SRs; the majority were affiliated with academic institutions and had prior experience publishing SRs. The majority of SRs were published in specialty dentistry journals, with implant or implant-related topics being the primary topics of interest for most. According to AMSTAR, most quality aspects were adequately fulfilled by less than half of the reviews. Assessment of publication bias and searches of the grey literature were the components adhered to most poorly. Overall, the methodological quality of the prosthodontic-related systematic reviews was deemed limited. Future recommendations include that authors have prior training in conducting SRs and that journals adopt a universal checklist, adherence to which would address all key characteristics of an unbiased SR process. © 2013 Blackwell Publishing Ltd.

  19. Interdisciplinary evaluation of dysphagia: clinical swallowing evaluation and videoendoscopy of swallowing.

    PubMed

    Sordi, Marina de; Mourão, Lucia Figueiredo; Silva, Ariovaldo Armando da; Flosi, Luciana Claudia Leite

    2009-01-01

    Patients with dysphagia have impairments in many aspects, and an interdisciplinary approach is fundamental to define diagnosis and treatment. A joint approach in the clinical and videoendoscopic evaluation is paramount. The aim was to study the correlation between the clinical assessment (ACD) and the videoendoscopic (VED) assessment of swallowing by classifying the degree of severity and by qualitative/descriptive analyses of the procedures. The study was cross-sectional, descriptive and comparative, held from March to December 2006 at the Otolaryngology/Dysphagia ward of a hospital in the countryside of São Paulo. Thirty dysphagic patients with different disorders were assessed by ACD and VED. The data were classified by means of severity scales and qualitative/descriptive analysis. The correlation between the ACD and VED severity scales pointed to a statistically significant but low agreement (kappa = 0.4, p = 0.006). The correlation between the qualitative/descriptive analyses pointed to an excellent and statistically significant agreement (kappa = 0.962, p < 0.001) for the entire sample. The low agreement between the severity scales points to a need to perform both procedures, reinforcing VED as a feasible procedure. The descriptive qualitative analysis pointed to an excellent agreement, and these data reinforce the need to understand swallowing as a process.

  20. R and Spatial Data

    EPA Science Inventory

    R is an open source language and environment for statistical computing and graphics that can also be used for both spatial analysis (i.e. geoprocessing and mapping of different types of spatial data) and spatial data analysis (i.e. the application of statistical descriptions and ...

  1. WASP (Write a Scientific Paper) using Excel - 2: Pivot tables.

    PubMed

    Grech, Victor

    2018-02-01

    Data analysis at the descriptive stage and the eventual presentation of results requires the tabulation and summarisation of data. This exercise should always precede inferential statistics. Pivot tables and pivot charts are one of Excel's most powerful and underutilised features, with tabulation functions that immensely facilitate descriptive statistics. Pivot tables permit users to dynamically summarise and cross-tabulate data, create tables in several dimensions, offer a range of summary statistics and can be modified interactively with instant outputs. Large and detailed datasets are thereby easily manipulated making pivot tables arguably the best way to explore, summarise and present data from many different angles. This second paper in the WASP series in Early Human Development provides pointers for pivot table manipulation in Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
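
    The cross-tabulation described above can also be sketched in code; a minimal example, assuming a small hypothetical dataset (not the article's), with pandas.pivot_table standing in for Excel's pivot table:

      # Minimal sketch of pivot-table style descriptive summaries on a
      # hypothetical dataset; pandas.pivot_table plays the role of Excel's pivot table.
      import pandas as pd

      df = pd.DataFrame({
          "year":   [2016, 2016, 2017, 2017, 2017, 2018],
          "sex":    ["M", "F", "M", "F", "F", "M"],
          "weight": [3.2, 2.9, 3.4, 3.1, 2.8, 3.5],   # birth weight in kg (hypothetical)
      })

      summary = pd.pivot_table(df, values="weight", index="year", columns="sex",
                               aggfunc=["mean", "count"])
      print(summary)   # mean weight and counts cross-tabulated by year and sex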

  2. Predictors of Errors of Novice Java Programmers

    ERIC Educational Resources Information Center

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  3. The Status of Child Nutrition Programs in Colorado.

    ERIC Educational Resources Information Center

    McMillan, Daniel C.; Vigil, Herminia J.

    This report provides descriptive and statistical data on the status of child nutrition programs in Colorado. The report contains descriptions of the National School Lunch Program, school breakfast programs, the Special Milk Program, the Summer Food Service Program, the Nutrition Education and Training Program, state dietary guidelines, Colorado…

  4. Statistical correlation of structural mode shapes from test measurements and NASTRAN analytical values

    NASA Technical Reports Server (NTRS)

    Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.

    1983-01-01

    The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.

  5. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    …theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Keywords: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.

  6. Statistical Analysis of Research Data | Center for Cancer Research

    Cancer.gov

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.

  7. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

    To study the application of statistics in observational research papers on chronic diseases recently published in Chinese Medical Association journals with an impact factor above 0.5. Using a self-developed criterion, two investigators independently assessed the application of statistics in these journals; differences of opinion were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clearly stating the research objectives, target population, sampling issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correct description of quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressing the results of statistical inference methods related to quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively, and 89.49% of the conclusions responded directly to the research objectives. However, 69.60% of the papers did not state the exact name of the study design used, and 11.14% lacked a statement of the exclusion criteria. Only 5.16% of the papers clearly explained the sample size estimation, and only 24.21% clearly described the variable value assignment. The rate of describing how the statistical analysis and database management were conducted was only 24.15%, and 18.75% of the papers did not express the statistical inference methods sufficiently. A quarter of the papers did not use standardization appropriately. As for statistical inference, only 24.12% described the prerequisites of the statistical tests, while 9.94% of the papers did not employ the inferential method that should have been used. The main deficiencies in the application of statistics in these observational research papers on chronic diseases were as follows: lack of sample size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for statistical inference.

  8. Environmental statistics with S-Plus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millard, S.P.; Neerchal, N.K.

    1999-12-01

    The combination of easy-to-use software with easy access to a description of the statistical methods (definitions, concepts, etc.) makes this book an excellent resource. One of the major features of this book is the inclusion of general information on environmental statistical methods and examples of how to implement these methods using the statistical software package S-Plus and the add-in modules Environmental-Stats for S-Plus, S+SpatialStats, and S-Plus for ArcView.

  9. Impact of isotropic constitutive descriptions on the predicted peak wall stress in abdominal aortic aneurysms.

    PubMed

    Man, V; Polzer, S; Gasser, T C; Novotny, T; Bursa, J

    2018-03-01

    Biomechanics-based assessment of Abdominal Aortic Aneurysm (AAA) rupture risk has gained considerable scientific and clinical momentum. However, computation of peak wall stress (PWS) using state-of-the-art finite element models is time demanding. This study investigates which features of the constitutive description of AAA wall are decisive for achieving acceptable stress predictions in it. Influence of five different isotropic constitutive descriptions of AAA wall is tested; models reflect realistic non-linear, artificially stiff non-linear, or artificially stiff pseudo-linear constitutive descriptions of AAA wall. Influence of the AAA wall model is tested on idealized (n=4) and patient-specific (n=16) AAA geometries. Wall stress computations consider a (hypothetical) load-free configuration and include residual stresses homogenizing the stresses across the wall. Wall stress differences amongst the different descriptions were statistically analyzed. When the qualitatively similar non-linear response of the AAA wall with low initial stiffness and subsequent strain stiffening was taken into consideration, wall stress (and PWS) predictions did not change significantly. Keeping this non-linear feature when using an artificially stiff wall can save up to 30% of the computational time, without significant change in PWS. In contrast, a stiff pseudo-linear elastic model may underestimate the PWS and is not reliable for AAA wall stress computations. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  10. Statistical Package User’s Guide.

    DTIC Science & Technology

    1980-08-01

    Excerpts from the program descriptions: STACH (nonparametric descriptive statistics) and CHIRA (coefficient of concordance). Test data for the programs were drawn from John Neter and William Wasserman, Applied Linear Statistical Models; ranked data are used for program CHIRA.

  11. K-means cluster analysis of tourist destination in special region of Yogyakarta using spatial approach and social network analysis (a case study: post of @explorejogja instagram account in 2016)

    NASA Astrophysics Data System (ADS)

    Iswandhani, N.; Muhajir, M.

    2018-03-01

    This research was conducted in the Department of Statistics, Islamic University of Indonesia. The data are primary data obtained from posts of the @explorejogja Instagram account from January to December 2016. The @explorejogja account features many tourist destinations that can be visited by both domestic and foreign tourists; it is therefore useful to cluster these destinations based on the number of likes from Instagram users, taken as a measure of popularity. The purpose of this research is to determine the distribution of the most popular tourist spots, form clusters of tourist destinations, and identify the centers of popularity of tourist destinations based on the @explorejogja Instagram account in 2016. The statistical analyses used are descriptive statistics, k-means clustering, and social network analysis. The results include the top 10 most popular destinations in Yogyakarta, an HTML-based map of the distribution of 121 tourist destination points, three clusters (cluster 1 with 52 destinations, cluster 2 with 9 destinations, and cluster 3 with 60 destinations), and the centers of popularity of tourist destinations in the Special Region of Yogyakarta by district.
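
    A minimal sketch of the clustering step follows, assuming hypothetical per-destination coordinates and like counts rather than the actual @explorejogja posts, with scikit-learn used for k-means:

      # Minimal k-means sketch on hypothetical destination data (latitude,
      # longitude, number of likes); the real study used scraped Instagram posts.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n = 121                                   # number of destination points, as in the study
      X = np.column_stack([
          rng.uniform(-8.2, -7.6, n),           # latitude (hypothetical)
          rng.uniform(110.0, 110.8, n),         # longitude (hypothetical)
          rng.poisson(800, n),                  # likes per destination (hypothetical)
      ])

      X_scaled = StandardScaler().fit_transform(X)   # put coordinates and likes on one scale
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
      print(np.bincount(labels))                # cluster sizes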

  12. Anger and depression levels of mothers with premature infants in the neonatal intensive care unit.

    PubMed

    Kardaşözdemir, Funda; Akgün Şahin, Zümrüt

    2016-02-04

    The aim of this study was to examine the anger and depression levels of mothers who had a premature infant in the NICU, and the factors affecting them. This descriptive study was performed in level I and II NICU units at three state hospitals in Turkey. The data were collected with a demographic questionnaire, the Beck Depression Inventory and the Anger Expression Scale. Descriptive statistics, parametric and nonparametric statistical tests and Pearson correlation were used in the data analysis. Mothers whose infants were under care in the NICU had moderate depression. Mothers' educational level, income level and the gender of the infant were statistically significant factors (p < 0.05). A positive relationship between depression and trait anger scores was statistically significant, and a negative relationship between depression and anger-control scores was also statistically significant (p < 0.05). Based on these results, it is recommended that mothers at risk of depression and anger in the NICU be evaluated by nurses and that nurses develop their counseling roles.

  13. Statistics of high-level scene context.

    PubMed

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
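
    A minimal sketch of the "bag of words" level of description, assuming a handful of hypothetical labeled scenes rather than the LabelMe corpus, with a linear classifier from scikit-learn:

      # Minimal sketch: scenes described only by the list of objects they contain,
      # classified with a linear model. Object lists and categories are hypothetical.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression

      scenes = [
          "car road building sign pedestrian",
          "bed lamp pillow window curtain",
          "tree grass bench path dog",
          "car truck road traffic_light building",
          "bed dresser mirror lamp rug",
          "tree flower grass fountain bench",
      ]
      labels = ["street", "bedroom", "park", "street", "bedroom", "park"]

      vec = CountVectorizer()
      X = vec.fit_transform(scenes)             # object-count ("bag of words") features
      clf = LogisticRegression(max_iter=1000).fit(X, labels)
      print(clf.predict(vec.transform(["road car sign"])))   # expected: street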

  14. Over ten thousand cases and counting: acidbase.org is serving the critical care community.

    PubMed

    Elbers, Paul W G; Van Regenmortel, Niels; Gatz, Rainer

    2015-01-01

    Acidbase.org has been serving the critical care community for over a decade. The backbone of this online resource consists of Peter Stewart's original text "How to understand Acid-Base" which is freely available to everyone. In addition, Stewart's Textbook of Acid Base, which puts the theory in today's clinical context is available for purchase from the website. However, many intensivists use acidbase.org on a daily basis for its educational content and in particular for its analysis module. This review provides an overview of the history of the website, a tutorial and descriptive statistics of over 10,000 queries submitted to the analysis module.

  15. Automatic Classification of Medical Text: The Influence of Publication Form

    PubMed Central

    Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.

    1988-01-01

    Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.

  16. Development and evaluation of statistical shape modeling for principal inner organs on torso CT images.

    PubMed

    Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi

    2014-07-01

    The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building the statistical shape models for different inner organs efficiently. This scheme combines the traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
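
    A minimal sketch of the point-distribution-modeling step and the "compactness" measure, assuming hypothetical pre-aligned landmarks with correspondences already established (the group-wise minimum-description-length optimization used in the study is not reproduced):

      # Minimal point distribution model sketch: PCA on aligned landmark shapes;
      # compactness = cumulative fraction of shape variance per retained mode.
      # Landmarks are random stand-ins; real surfaces would come from CT segmentations.
      import numpy as np

      rng = np.random.default_rng(1)
      n_shapes, n_landmarks = 50, 200
      base = rng.normal(size=(n_landmarks, 3))                 # hypothetical mean organ shape
      shapes = base + 0.05 * rng.normal(size=(n_shapes, n_landmarks, 3))

      X = shapes.reshape(n_shapes, -1)                         # one row per shape vector
      X_centered = X - X.mean(axis=0)
      _, s, _ = np.linalg.svd(X_centered, full_matrices=False)
      variances = s**2 / (n_shapes - 1)                        # variance captured by each mode

      compactness = np.cumsum(variances) / variances.sum()
      print("modes needed for 95% of variance:", np.searchsorted(compactness, 0.95) + 1)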

  17. High-resolution image reconstruction technique applied to the optical testing of ground-based astronomical telescopes

    NASA Astrophysics Data System (ADS)

    Jin, Zhenyu; Lin, Jing; Liu, Zhong

    2008-07-01

    Based on a study of the classical techniques (such as the Shack-Hartmann wave-front sensor) used for testing the aberrations of ground-based astronomical optical telescopes, we put forward two testing methods founded on high-resolution image reconstruction technology: one based on the averaged short-exposure OTF and the other based on the speckle interferometric OTF of Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat and J.-Y. Zhang indicated that the SITF statistics are affected by the telescope optical aberrations, which means the SITF statistics are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Telescope diffraction-limited information can be obtained through two statistical treatments of abundant speckle images: by the first method, we can extract low-frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; by the second method, we can get a more precise description of the telescope PSF including high-frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness and to compare the testing results with those obtained by the Shack-Hartmann wave-front sensor. This is described in detail in our paper.

  18. A study on phenomenology of Dhat syndrome in men in a general medical setting.

    PubMed

    Prakash, Sathya; Sharan, Pratap; Sood, Mamta

    2016-01-01

    "Dhat syndrome" is believed to be a culture-bound syndrome of the Indian subcontinent. Although many studies have been performed, many have methodological limitations and there is a lack of agreement in many areas. The aim is to study the phenomenology of "Dhat syndrome" in men and to explore the possibility of subtypes within this entity. It is a cross-sectional descriptive study conducted at a sex and marriage counseling clinic of a tertiary care teaching hospital in Northern India. An operational definition and assessment instrument for "Dhat syndrome" was developed after taking all concerned stakeholders into account and review of literature. It was applied on 100 patients along with socio-demographic profile, Hamilton Depression Rating Scale, Hamilton Anxiety Rating Scale, Mini International Neuropsychiatric Interview, and Postgraduate Institute Neuroticism Scale. For statistical analysis, descriptive statistics, group comparisons, and Pearson's product moment correlations were carried out. Factor analysis and cluster analysis were done to determine the factor structure and subtypes of "Dhat syndrome." A diagnostic and assessment instrument for "Dhat syndrome" has been developed and the phenomenology in 100 patients has been described. Both the health beliefs scale and associated symptoms scale demonstrated a three-factor structure. The patients with "Dhat syndrome" could be categorized into three clusters based on severity. There appears to be a significant agreement among various stakeholders on the phenomenology of "Dhat syndrome" although some differences exist. "Dhat syndrome" could be subtyped into three clusters based on severity.

  19. Response to traumatic brain injury neurorehabilitation through an artificial intelligence and statistics hybrid knowledge discovery from databases methodology.

    PubMed

    Gibert, Karina; García-Rudolph, Alejandro; García-Molina, Alberto; Roig-Rovira, Teresa; Bernabeu, Montse; Tormos, José María

    2008-01-01

    The objective was to develop a classificatory tool to identify different populations of patients with traumatic brain injury based on the characteristics of the deficit and the response to treatment. A KDD framework was used in which, first, descriptive statistics were computed for every variable, followed by data cleaning and selection of relevant variables. The data were then mined using a generalization of Clustering Based on Rules (CIBR), a hybrid AI and statistics technique which combines inductive learning (AI) and clustering (statistics). A prior knowledge base (KB) is considered to properly bias the clustering; the semantic constraints implied by the KB hold in the final clusters, guaranteeing interpretability of the results. A generalization (Exogenous Clustering Based on Rules, ECIBR) is presented, which allows the KB to be defined in terms of variables that are not themselves considered in the clustering process, for greater flexibility. Several tools, such as the class panel graph, are introduced in the methodology to assist final interpretation. A set of 5 classes was recommended by the system, and interpretation permitted the labeling of profiles. From the medical point of view, the composition of the classes corresponds well with different patterns of increasing response to rehabilitation treatments. All patients who were initially assessable form a single group; severely impaired patients are subdivided into four profiles with clearly distinct response patterns. Particularly interesting is the partial-response profile, in which patients could not improve executive functions. Meaningful classes were obtained and, from a semantic point of view, the results were noticeably improved relative to classical clustering, supporting our view that hybrid AI and statistics techniques are more powerful for KDD than pure ones.

  20. Fourier descriptor analysis and unification of voice range profile contours: method and applications.

    PubMed

    Pabon, Peter; Ternström, Sten; Lamarche, Anick

    2011-06-01

    To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also is compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices--for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
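
    A minimal sketch of the Fourier-descriptor step, assuming a hypothetical closed contour standing in for a VRP boundary; the contour points are encoded as complex numbers and their FFT gives the descriptors:

      # Minimal Fourier-descriptor sketch for a closed contour; the contour here is
      # a hypothetical ellipse standing in for a voice range profile boundary.
      import numpy as np

      t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
      f0  = 200 + 150 * np.cos(t)        # fundamental frequency axis (hypothetical contour)
      spl = 80 + 20 * np.sin(t)          # sound pressure level axis (hypothetical contour)

      z = f0 + 1j * spl                  # encode contour points as complex numbers
      fd = np.fft.fft(z) / len(z)        # Fourier descriptors

      # Keep a few low-order descriptors: a smooth, compact summary of contour shape
      n_keep = 8
      truncated = np.zeros_like(fd)
      truncated[:n_keep] = fd[:n_keep]
      truncated[-n_keep:] = fd[-n_keep:]
      reconstruction = np.fft.ifft(truncated) * len(z)
      print("first descriptor magnitudes:", np.abs(fd[:5]))
      print("max reconstruction error:", np.max(np.abs(z - reconstruction)))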

  1. Using Carbon Emissions Data to "Heat Up" Descriptive Statistics

    ERIC Educational Resources Information Center

    Brooks, Robert

    2012-01-01

    This article illustrates using carbon emissions data in an introductory statistics assignment. The carbon emissions data has desirable characteristics including: choice of measure; skewness; and outliers. These complexities allow research and public policy debate to be introduced. (Contains 4 figures and 2 tables.)

  2. Statistical mechanics of economics I

    NASA Astrophysics Data System (ADS)

    Kusmartsev, F. V.

    2011-02-01

    We show that statistical mechanics is useful in the description of financial crises and economics. Taking a large number of instant snapshots of a market over an interval of time, we construct ensembles of them and study their statistical inference. This results in a probabilistic description of the market and gives capital, money, income, wealth and debt distributions, which in most cases take the form of the Bose-Einstein distribution. In addition, statistical mechanics provides the main market equations and laws which govern the correlations between the amount of money, debt, product, prices and number of retailers. We applied the relations found to a study of the evolution of the economy in the USA between 1996 and 2008 and observe that over that time the income of the majority of the population is well described by the Bose-Einstein distribution, whose parameters differ for each year. Each financial crisis corresponds to a peak in the absolute activity coefficient. The analysis correctly indicates past crises and predicts the future one.
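
    For reference, the Bose-Einstein form referred to above can be written in standard statistical-mechanics notation (the mapping of these symbols onto income variables is the authors' and is not reproduced here):

      n(\varepsilon) = \frac{1}{\exp\!\left((\varepsilon - \mu)/T\right) - 1}

    where n(ε) is the occupation of level ε (here, the number of agents at a given income level), μ is the chemical-potential-like parameter and T the temperature-like parameter.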

  3. Multivariate assessment of event-related potentials with the t-CWT method.

    PubMed

    Bostanov, Vladimir

    2015-11-05

    Event-related brain potentials (ERPs) are usually assessed with univariate statistical tests although they are essentially multivariate objects. Brain-computer interface applications are a notable exception to this practice, because they are based on multivariate classification of single-trial ERPs. Multivariate ERP assessment can be facilitated by feature extraction methods. One such method is t-CWT, a mathematical-statistical algorithm based on the continuous wavelet transform (CWT) and Student's t-test. This article begins with a geometric primer on some basic concepts of multivariate statistics as applied to ERP assessment in general and to the t-CWT method in particular. Further, it presents for the first time a detailed, step-by-step, formal mathematical description of the t-CWT algorithm. A new multivariate outlier rejection procedure based on principal component analysis in the frequency domain is presented as an important pre-processing step. The MATLAB and GNU Octave implementation of t-CWT is also made publicly available for the first time as free and open source code. The method is demonstrated on some example ERP data obtained in a passive oddball paradigm. Finally, some conceptually novel applications of the multivariate approach in general and of the t-CWT method in particular are suggested and discussed. Hopefully, the publication of both the t-CWT source code and its underlying mathematical algorithm along with a didactic geometric introduction to some basic concepts of multivariate statistics would make t-CWT more accessible to both users and developers in the field of neuroscience research.
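
    A minimal sketch of the core idea, assuming hypothetical single-trial epochs for two conditions; each trial is convolved with Ricker wavelets at several scales and Student's t-values are computed pointwise over the time-scale plane (the published MATLAB/GNU Octave code implements the full algorithm, including its multivariate feature selection and outlier rejection):

      # Minimal t-CWT-style sketch on hypothetical single-trial ERP epochs:
      # wavelet-transform each trial, then compute pointwise t-values between conditions.
      import numpy as np
      from scipy.stats import ttest_ind

      def ricker(width, n=101):
          # Unnormalized Ricker ("Mexican hat") wavelet of a given width
          t = np.arange(n) - n // 2
          return (1 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

      rng = np.random.default_rng(2)
      n_trials, n_samples = 40, 300
      cond_a = rng.normal(size=(n_trials, n_samples))
      cond_b = rng.normal(size=(n_trials, n_samples))
      cond_b[:, 140:180] += 0.8                  # hypothetical ERP effect in condition B

      scales = [4, 8, 16, 32]
      def cwt(trials):
          return np.stack([[np.convolve(tr, ricker(s), mode="same") for s in scales]
                           for tr in trials])    # shape: (trials, scales, time)

      t_map, _ = ttest_ind(cwt(cond_a), cwt(cond_b), axis=0)
      print("largest |t| in the time-scale plane:", np.abs(t_map).max())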

  4. Do the adult criminal careers of African Americans fit the “facts”?

    PubMed Central

    Doherty, Elaine Eggleston; Ensminger, Margaret E.

    2014-01-01

    Purpose A major gap in the criminal career research is our understanding of offending among African Americans, especially beyond early adulthood. In light of this gap, this study describes the criminal career patterns of a cohort of African American males and females. Methods This paper uses official criminal history data spanning ages 17 to 52 from the Woodlawn Study, a community cohort of 1,242 urban African American males and females. We use basic descriptive statistics as well as group-based modeling to provide a detailed description of the various dimensions of their adult criminal careers. Results We find cumulative prevalence rates similar to those for African Americans from national probability sample estimates, yet participation in offending extends farther into midlife than expected with a substantial proportion of the cohort still engaged in offending into their 30s. Conclusions The descriptive analyses contribute to the larger body of knowledge regarding the relationship between age and crime and the unfolding of the criminal career for African American males and females. The applicability of existing life course and developmental theories is discussed in light of the findings. PMID:25605979

  5. Reliability and convergence of three concepts of narcissistic personality.

    PubMed

    Perry, J D; Perry, J C

    1996-01-01

    Until recent years, the personality disorders have been relatively unexplored compared to other psychiatric diagnoses. Over 15 years ago, there was little agreement on the diagnosis of borderline personality disorder (Perry and Klerman 1978), but efforts to specify the constructs and respective criteria for the borderline diagnosis spurred a plethora of systematic research. The result is that, next to antisocial personality disorder, borderline has become one of the best-documented and validated personality disorders (Perry and Vaillant 1989). One important shift has been that good descriptive studies have gradually led to studies of etiological factors, such as childhood physical and sexual abuse, and severe neglect (Herman et al. 1989; Perry and Herman 1992), which in turn have led to empirically based treatment approaches (Herman 1992; Perry et al. 1990). Despite inclusion in The Diagnostic and Statistical Manual of Mental Disorders (DSM-III and DSM-III-R), narcissistic personality is still at the beginning of this process of description, empirical testing, and validation (Gunderson et al. 1991). This study empirically examines three descriptions of narcissistic personality in order to look for common underlying dimensions that may have etiological and treatment significance.

  6. Attitude towards Pre-Marital Genetic Screening among Students of Osun State Polytechnics in Nigeria

    ERIC Educational Resources Information Center

    Odelola, J. O.; Adisa, O.; Akintaro, O. A.

    2013-01-01

    This study investigated the attitude towards pre-marital genetic screening among students of Osun State Polytechnics. Descriptive survey design was used for the study. The instrument for data collection was self developed and structured questionnaire in four-point likert scale format. Descriptive statistics of frequency count and percentages were…

  7. Basic School Teachers' Perceptions about Curriculum Design in Ghana

    ERIC Educational Resources Information Center

    Abudu, Amadu Musah; Mensah, Mary Afi

    2016-01-01

    This study focused on teachers' perceptions about curriculum design and barriers to their participation. The sample size was 130 teachers who responded to a questionnaire. The analyses made use of descriptive statistics and descriptions. The study found that the level of teachers' participation in curriculum design is low. The results further…

  8. Quantum-like model for the adaptive dynamics of the genetic regulation of E. coli's metabolism of glucose/lactose.

    PubMed

    Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro

    2012-06-01

    We developed a quantum-like model describing the gene regulation of glucose/lactose metabolism in a bacterium, Escherichia coli. Our quantum-like model can be considered a kind of operational formalism for microbiology and genetics. Instead of trying to describe processes in a cell in full detail, we propose a formal operator description. Such a description may be very useful in situations in which a detailed description of the processes is impossible or extremely complicated. We analyze statistical data obtained from experiments, and we compute the degree of E. coli's preference within adaptive dynamics. It is known that there are several types of E. coli characterized by their metabolic systems. We demonstrate that the same type of E. coli can be described by well-determined operators, and we find invariant operator quantities characterizing each type. Such invariant quantities can be calculated from the obtained statistical data.

  9. Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity

    PubMed Central

    Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny

    2015-01-01

    Recent experimental breakthroughs have finally allowed the implementation of in-vitro reaction kinetics (the so-called enzyme-based logic) that codes for two-input logic gates and mimics the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g. thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can serve this purpose: here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single and double ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences with classical cooperativity (and anti-cooperativity). PMID:25976626

  12. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
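
    A toy Monte Carlo sketch of the sampling-error idea, assuming a simple AR(1) stand-in for the GATE-tuned space-time rain model (grossly simplified relative to the study):

      # Toy sketch: sampling error from intermittent satellite visits. Hourly
      # area-averaged rain rates follow a simple AR(1) process (a stand-in for the
      # GATE-tuned stochastic model); the "satellite" sees the area every 12 hours.
      import numpy as np

      rng = np.random.default_rng(3)
      hours_per_month, revisit, n_months = 720, 12, 2000
      phi = 0.9                                      # hour-to-hour autocorrelation (assumed)

      errors, means = [], []
      for _ in range(n_months):
          rain = np.empty(hours_per_month)
          rain[0] = rng.gamma(2.0, 0.5)
          for t in range(1, hours_per_month):        # AR(1) fluctuations around a positive mean
              rain[t] = max(0.0, phi * rain[t - 1] + (1 - phi) * 1.0 + rng.normal(0, 0.3))
          true_mean = rain.mean()
          errors.append(rain[::revisit].mean() - true_mean)   # intermittent overpasses
          means.append(true_mean)

      errors = np.asarray(errors)
      print("RMS sampling error as % of mean rain:",
            round(100 * errors.std() / np.mean(means), 1))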

  13. The Relationship Between Hospital Construction and High-Risk Infant Auditory Function at NICU Discharge: A Retrospective Descriptive Cohort Study.

    PubMed

    Willis, Valerie

    2018-04-01

    To describe the difference in auditory function at neonatal intensive care unit (NICU) discharge between high-risk infant cases exposed to hospital construction during the NICU stay and those not exposed. Noise produced by routine NICU caregiving exceeds the recommended intensity; as California hospitals undergo construction to meet seismic safety regulations, vulnerable neonates are potentially exposed to even higher levels of noise, and the ramifications are unknown. A retrospective, data-based, descriptive cohort design was used to compare high-risk infant auditory function at NICU discharge between hospital-construction-exposed and unexposed groups (N = 540 infant cases: 243 construction-exposed and 297 unexposed controls). Included were infant cases born in and discharged from the study-site NICU in 2010 (unexposed) or 2015 (exposed) who received a newborn hearing screening by automated auditory brainstem evoked response (ABER) prior to discharge with results reported. Infant cases were excluded if ABER hearing screen results were unavailable, if potentially confounding characteristics were present (congenital infection, major anomalies including cleft lip and/or palate), or if the infant was transferred into or out of the study site. The outcome measure was the ABER result; analyses used descriptive statistics (SPSS Version 24.0), hypothesis testing, correlation, and logistic regression. The difference in auditory function at NICU discharge between high-risk infant cases exposed to hospital construction noise and those unexposed was not statistically significant, χ2 = 1.666, df = 4, p = .1968, 95% confidence interval [-0.635, 2.570]. More research is needed to better understand whether hospital construction exposure during NICU admission negatively affects high-risk infant auditory function. Findings may catalyze theory development, future research, and child health policy.

  14. Estimation of confidence limits for descriptive indexes derived from autoregressive analysis of time series: Methods and application to heart rate variability.

    PubMed

    Beda, Alessandro; Simpson, David M; Faes, Luca

    2017-01-01

    The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessments of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.
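
    A minimal sketch of the bootstrap branch of the approach, assuming a hypothetical heart-period series, a least-squares AR(2) fit, and total variance as a stand-in descriptive index (the paper's spectral and information-theoretic indexes are more elaborate):

      # Minimal residual-bootstrap sketch for a confidence interval on an index
      # derived from an AR fit. Series, order and index are illustrative stand-ins.
      import numpy as np

      rng = np.random.default_rng(4)
      n, p = 300, 2
      x = np.empty(n)
      x[:p] = 800 + rng.normal(0, 20, p)
      for t in range(p, n):                          # hypothetical heart-period series (ms)
          x[t] = 800 + 0.5 * (x[t-1] - 800) - 0.3 * (x[t-2] - 800) + rng.normal(0, 15)

      def fit_ar(series, order=p):
          # Least-squares AR fit: intercept plus `order` lagged values
          y = series[order:]
          X = np.column_stack([np.ones(len(y))] +
                              [series[order - k:-k] for k in range(1, order + 1)])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef, y - X @ coef

      def index(series):
          return np.var(series)                      # stand-in descriptive index

      coef, resid = fit_ar(x)
      boot = []
      for _ in range(500):                           # residual bootstrap replicates
          e = rng.choice(resid, size=n, replace=True)
          xb = np.empty(n); xb[:p] = x[:p]
          for t in range(p, n):
              xb[t] = coef[0] + coef[1] * xb[t-1] + coef[2] * xb[t-2] + e[t]
          boot.append(index(xb))

      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"index = {index(x):.1f}, 95% bootstrap CI = [{lo:.1f}, {hi:.1f}]")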

  16. Using statistical process control for monitoring the prevalence of hospital-acquired pressure ulcers.

    PubMed

    Kottner, Jan; Halfens, Ruud

    2010-05-01

    Institutionally acquired pressure ulcers are used as outcome indicators to assess the quality of pressure ulcer prevention programs. Determining whether quality improvement projects that aim to decrease the proportions of institutionally acquired pressure ulcers lead to real changes in clinical practice depends on the measurement method and statistical analysis used. To examine whether nosocomial pressure ulcer prevalence rates in hospitals in the Netherlands changed, a secondary data analysis using different statistical approaches was conducted of annual (1998-2008) nationwide nursing-sensitive health problem prevalence studies in the Netherlands. Institutions that participated regularly in all survey years were identified. Risk-adjusted nosocomial pressure ulcers prevalence rates, grade 2 to 4 (European Pressure Ulcer Advisory Panel system) were calculated per year and hospital. Descriptive statistics, chi-square trend tests, and P charts based on statistical process control (SPC) were applied and compared. Six of the 905 healthcare institutions participated in every survey year and 11,444 patients in these six hospitals were identified as being at risk for pressure ulcers. Prevalence rates per year ranged from 0.05 to 0.22. Chi-square trend tests revealed statistically significant downward trends in four hospitals but based on SPC methods, prevalence rates of five hospitals varied by chance only. Results of chi-square trend tests and SPC methods were not comparable, making it impossible to decide which approach is more appropriate. P charts provide more valuable information than single P values and are more helpful for monitoring institutional performance. Empirical evidence about the decrease of nosocomial pressure ulcer prevalence rates in the Netherlands is contradictory and limited.
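
    A minimal sketch of the P chart computation mentioned above, assuming hypothetical yearly at-risk counts and pressure ulcer counts for one hospital rather than the Dutch survey data:

      # Minimal P chart sketch: centre line and 3-sigma control limits for yearly
      # proportions of nosocomial pressure ulcers. Counts are hypothetical.
      import numpy as np

      at_risk = np.array([180, 205, 190, 175, 210, 198, 185, 192, 200, 188, 195])  # patients per year
      ulcers  = np.array([ 30,  41,  28,  33,  35,  25,  27,  29,  22,  24,  21])  # grade 2-4 cases

      p = ulcers / at_risk
      p_bar = ulcers.sum() / at_risk.sum()            # centre line: pooled proportion
      sigma = np.sqrt(p_bar * (1 - p_bar) / at_risk)  # limits vary with yearly sample size
      ucl = p_bar + 3 * sigma
      lcl = np.clip(p_bar - 3 * sigma, 0, None)

      for year, (pi, lo, hi) in enumerate(zip(p, lcl, ucl), start=1998):
          flag = "signal" if (pi > hi or pi < lo) else "common-cause variation"
          print(f"{year}: p={pi:.3f} limits=[{lo:.3f}, {hi:.3f}] -> {flag}")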

  17. Quasi-Monochromatic Visual Environments and the Resting Point of Accommodation

    DTIC Science & Technology

    1988-01-01

    No statistically significant differences were revealed to support the possibility of color-mediated differential regression to the resting point of accommodation. Results are discussed with respect to the general findings of the total sample as well as the specific behavior of individual participants; the remaining ten participants varied considerably with respect to the averaged trends reported in the descriptive statistics, as well as with respect to precision.

  18. Performing Inferential Statistics Prior to Data Collection

    ERIC Educational Resources Information Center

    Trafimow, David; MacDonald, Justin A.

    2017-01-01

    Typically, in education and psychology research, the investigator collects data and subsequently performs descriptive and inferential statistics. For example, a researcher might compute group means and use the null hypothesis significance testing procedure to draw conclusions about the populations from which the groups were drawn. We propose an…

  19. Inside Rural Pennsylvania: A Statistical Profile.

    ERIC Educational Resources Information Center

    Center for Rural Pennsylvania, Harrisburg.

    Graphs, data tables, maps, and written descriptions give a statistical overview of rural Pennsylvania. A section on rural demographics covers population changes, racial and ethnic makeup, age cohorts, and families and income. Pennsylvania's rural population, the nation's largest, has increased more than its urban population since 1950, with the…

  20. Education Statistics Quarterly, Summer 2002.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…

  1. Education Statistics Quarterly, Spring 2002.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…

  2. From creation and annihilation operators to statistics

    NASA Astrophysics Data System (ADS)

    Hoyuelos, M.

    2018-01-01

    A procedure to derive the partition function of non-interacting particles with exotic or intermediate statistics is presented. The partition function is directly related to the associated creation and annihilation operators that obey some specific commutation or anti-commutation relations. The cases of Gentile statistics, quons, Polychronakos statistics, and ewkons are considered. Ewkons statistics was recently derived from the assumption of free diffusion in energy space (Hoyuelos and Sisterna, 2016); an ideal gas of ewkons has negative pressure, a feature that makes them suitable for the description of dark energy.

  3. Influences of a Church-Based Intervention on Falls Risk Among Seniors.

    PubMed

    Briggs, Morgan; Morzinski, Jeffrey A; Ellis, Julie

    2017-08-01

    Prior studies illustrate that community-based programs effectively decrease falls risk in older adults and that faith-based programs improve health behaviors, but the literature is unclear whether faith-based initiatives reduce seniors' fall risks. To address this gap, a long-term partnership led by 10 urban churches, a nearby nursing school, and a medical school developed a study with 3 objectives: determine baseline health concerns associated with falls (e.g., depression, polypharmacy), implement a nurse-led, faith-based health education initiative for community-dwelling African American seniors at risk of hospitalization, and assess pre- to post-program fall frequency. The 100 Healthy, At-Risk Families study team implemented 8 monthly educational health sessions promoting self-care and social support. Community nurses led the 60- to 90-minute sessions at each of 10 churches. To collect study data, nurses interviewed enrolled seniors pre- and post-intervention. Descriptive and comparison statistics were analyzed in Excel and the Statistical Package for the Social Sciences. Baseline data showed high rates of polypharmacy and physical imbalance, and no significant depression or gaps in social support. There was no statistically significant pre- to post-program change in fall frequency in the prior year. Study findings reveal insights about African American senior health and fall risks: church settings may provide a protective, psychosocial buffer for seniors, while polypharmacy and mobility/balance concerns indicate a need for continued attention to fall risks. The absence of an increase in falls from pre- to post-program was encouraging.

  4. Protein and gene model inference based on statistical modeling in k-partite graphs.

    PubMed

    Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter

    2010-07-06

    One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.

  5. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  6. Nurses' foot care activities in home health care.

    PubMed

    Stolt, Minna; Suhonen, Riitta; Puukka, Pauli; Viitanen, Matti; Voutilainen, Päivi; Leino-Kilpi, Helena

    2013-01-01

    This study described the basic foot care activities performed by nurses and the factors associated with these in the home care of older people. Data were collected from nurses (n=322) working in nine public home care agencies in Finland using the Nurses' Foot Care Activities Questionnaire (NFAQ). Data were analyzed statistically using descriptive statistics and multivariate linear models. Although some of the basic foot care activities nurses reported using were outdated, the majority of foot care activities were consistent with recommendations in the foot care literature. Longer working experience, referring patients with foot problems to a podiatrist and physiotherapist, and patient education in wart and nail care were associated with a high score for adequate foot care activities. Continuing education should focus on updating basic foot care activities and increasing the use of evidence-based foot care methods. Also, geriatric nursing research should focus on intervention research to improve the use of evidence-based basic foot care activities. Copyright © 2013 Mosby, Inc. All rights reserved.

  7. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.
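    A toy numerical illustration of the likelihood-ratio idea described above, assuming the photoelectron count in each small detector area is Poisson distributed with a different mean under the two hypotheses (background only vs. background plus a Gaussian-shaped image); all intensities below are invented for illustration and do not reproduce the thesis's detector model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Detector surface discretized into small areas along one axis (toy 1-D version).
x = np.linspace(-3, 3, 61)
background = 0.5 * np.ones_like(x)                 # uniform background rate per area
image = 2.0 * np.exp(-x**2 / (2 * 0.8**2))         # Gaussian-shaped image intensity

lam0 = background                  # H0: background only
lam1 = background + image          # H1: background + image

# Log-likelihood ratio for independent Poisson counts n_i:
#   log LR = sum_i [ n_i * log(lam1_i / lam0_i) - (lam1_i - lam0_i) ]
def log_lr(counts):
    return np.sum(counts * np.log(lam1 / lam0) - (lam1 - lam0))

# Simulate one observation under each hypothesis and compare to a threshold.
counts_h0 = rng.poisson(lam0)
counts_h1 = rng.poisson(lam1)
threshold = 0.0
print("log LR under H0:", round(log_lr(counts_h0), 2), "-> detect?", log_lr(counts_h0) > threshold)
print("log LR under H1:", round(log_lr(counts_h1), 2), "-> detect?", log_lr(counts_h1) > threshold)
```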

  8. Recognition of speaker-dependent continuous speech with KEAL

    NASA Astrophysics Data System (ADS)

    Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.

    1989-04-01

    A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry, containing various phonological forms, against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
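    The lexical access step above relies on statistical dynamic programming; as a simplified stand-in, the sketch below scores a hypothetical phoneme string against lexical entries with a classic edit-distance recurrence. The real KEAL matcher works on a probabilistic phonetic lattice, which this toy version does not attempt to reproduce.

```python
def edit_distance(observed, lexical):
    """Minimal dynamic-programming alignment cost between two phoneme sequences."""
    n, m = len(observed), len(lexical)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if observed[i - 1] == lexical[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[n][m]

# Hypothetical phoneme strings: pick the lexicon entry with the lowest alignment cost.
observed = ["k", "a", "t"]
lexicon = {"cat": ["k", "a", "t"], "cart": ["k", "a", "r", "t"], "hat": ["h", "a", "t"]}
best = min(lexicon, key=lambda w: edit_distance(observed, lexicon[w]))
print(best)  # -> "cat"
```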

  9. Statistical methods for detecting and comparing periodic data and their application to the nycthemeral rhythm of bodily harm: A population based study

    PubMed Central

    2010-01-01

    Background Animals, including humans, exhibit a variety of biological rhythms. This article describes a method for the detection and simultaneous comparison of multiple nycthemeral rhythms. Methods A statistical method for detecting periodic patterns in time-related data via harmonic regression is described. The method is particularly capable of detecting nycthemeral rhythms in medical data. Additionally a method for simultaneously comparing two or more periodic patterns is described, which derives from the analysis of variance (ANOVA). This method statistically confirms or rejects equality of periodic patterns. Mathematical descriptions of the detecting method and the comparing method are displayed. Results Nycthemeral rhythms of incidents of bodily harm in Middle Franconia are analyzed in order to demonstrate both methods. Every day of the week showed a significant nycthemeral rhythm of bodily harm. These seven patterns of the week were compared to each other revealing only two different nycthemeral rhythms, one for Friday and Saturday and one for the other weekdays. PMID:21059197
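    A minimal sketch of harmonic regression for a 24-hour (nycthemeral) rhythm, using synthetic hourly counts; the rhythm is modeled with one cosine/sine pair and its presence is tested with a standard F-test against the intercept-only model, which is one common way to set up the detection method described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hours = np.arange(0, 24 * 14)                     # two weeks of hourly data
true = 10 + 4 * np.cos(2 * np.pi * (hours % 24 - 21) / 24)
y = true + rng.normal(0, 2, size=hours.size)      # synthetic incident counts

# Design matrix: intercept + first harmonic of the 24-h period.
X = np.column_stack([np.ones_like(hours, dtype=float),
                     np.cos(2 * np.pi * hours / 24),
                     np.sin(2 * np.pi * hours / 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# F-test: does the harmonic model explain significantly more than a constant?
rss_full = np.sum((y - fitted) ** 2)
rss_null = np.sum((y - y.mean()) ** 2)
df1, df2 = 2, y.size - 3
F = ((rss_null - rss_full) / df1) / (rss_full / df2)
p = stats.f.sf(F, df1, df2)
print(f"amplitude={np.hypot(beta[1], beta[2]):.2f}, F={F:.1f}, p={p:.3g}")
```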

  10. A statistical mechanics model for free-for-all airplane passenger boarding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H.; /Fermilab

    2008-08-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.
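    A rough sketch of the Boltzmann-weighted seat choice described above, with invented seat "energies" (lower energy for front rows and aisle/window seats); each boarding passenger samples an unoccupied seat with probability proportional to exp(-E/T), and the normalization over the remaining free seats, i.e. the partition function seen by later passengers, shrinks accordingly.

```python
import numpy as np

rng = np.random.default_rng(42)
rows, cols = 20, 6
# Invented seat energies: front rows and aisle/window seats are "preferred" (lower energy).
row_idx, col_idx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
aisle_or_window = np.isin(col_idx, [0, 2, 3, 5])
energy = 0.05 * row_idx + np.where(aisle_or_window, 0.0, 0.5)

T = 0.3                       # "temperature": how strictly preferences are followed
occupied = np.zeros((rows, cols), dtype=bool)
boarding_order = []

for step in range(rows * cols):
    free_flat = np.flatnonzero(~occupied.ravel())
    weights = np.exp(-energy.ravel()[free_flat] / T)   # Boltzmann factors over free seats
    probs = weights / weights.sum()                    # normalization = partition function
    seat = rng.choice(free_flat, p=probs)
    occupied.flat[seat] = True
    boarding_order.append(divmod(int(seat), cols))

# Toy summary: how strongly the first boarders cluster in the preferred front half.
first_30 = boarding_order[:30]
print("front-half share among first 30 boarders:",
      sum(1 for r, c in first_30 if r < rows // 2) / 30)
```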

  11. Profile Of 'Original Articles' Published In 2016 By The Journal Of Ayub Medical College, Pakistan.

    PubMed

    Shaikh, Masood Ali

    2018-01-01

    The Journal of Ayub Medical College (JAMC) is the only Medline-indexed biomedical journal of Pakistan that is edited and published by a medical college. Assessing the trends in study designs employed, statistical methods used, and statistical analysis software used in the articles of medical journals helps in understanding the sophistication of published research. The objective of this descriptive study was to assess all original articles published by JAMC in the year 2016. JAMC published 147 original articles in the year 2016. The most commonly used study design was the cross-sectional study, with 64 (43.5%) articles reporting its use. Statistical tests involving bivariate analysis were most common, reported by 73 (49.6%) articles. Use of SPSS software was reported by 109 (74.1%) of the articles. Most of the original articles published, 138 (93.9%), were based on studies conducted in Pakistan. The number and sophistication of analyses reported in JAMC increased from 2014 to 2016.

  12. Parallel auto-correlative statistics with VTK.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.
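    The VTK engines themselves are not reproduced here; as a language-neutral illustration of what an auto-correlative statistics engine computes, the sketch below estimates the lag-k autocorrelation of a series with plain NumPy, a serial single-process stand-in for the parallel engine described in the report.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation r_k = sum_t (x_t - xbar)(x_{t+k} - xbar) / sum_t (x_t - xbar)^2."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    denom = np.dot(xc, xc)
    return np.array([np.dot(xc[:len(x) - k], xc[k:]) / denom for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
# AR(1)-like toy series: strong correlation at short lags.
x = np.zeros(500)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(np.round(autocorrelation(x, 5), 3))   # r_0 = 1.0, decaying with lag
```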

  13. The evaluation of reproductive health PhD program in Iran: The input indicators analysis.

    PubMed

    AbdiShahshahani, Mahshid; Ehsanpour, Soheila; Yamani, Nikoo; Kohan, Shahnaz

    2014-11-01

    Appropriate quality achievement of a PhD program requires frequent assessment and discovery of the shortcomings in the program. Inputs, which are important elements of the curriculum, are frequently missed in evaluations. The purpose of this study was to evaluate the input indicators of the reproductive health PhD program in Iran based on the Context, Input, Process, and Product (CIPP) evaluation model. This is a descriptive and evaluative study based on the CIPP evaluation model. It was conducted in 2013 in four Iranian schools of nursing and midwifery of medical sciences universities. The statistical population consisted of four groups: heads of departments (n = 5), faculty members (n = 18), graduates (n = 12), and PhD students of reproductive health (n = 54). Data collection tools were five separate questionnaires including 37 indicators that were developed by the researcher. Content and face validity were evaluated based on the experts' opinions. The Cronbach's alpha coefficient was calculated in order to obtain the reliability of the questionnaires. Collected data were analyzed by SPSS software. Data were analyzed by descriptive statistics (mean, frequency, percentage, and standard deviation), and one-way analysis of variance (ANOVA) and least significant difference (LSD) post hoc tests to compare means between groups. The results of the study indicated that the highest percentage of the heads of departments (80%), graduates (66.7%), and students (68.5%) evaluated the status of input indicators of the reproductive health PhD program as relatively appropriate, while most of the faculty members (66.7%) evaluated it as appropriate. It is suggested to explore the reasons for the relatively appropriate evaluation of input indicators through further academic research and to improve the reproductive health PhD program accordingly.
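    A compact sketch of the comparison reported above, assuming hypothetical indicator scores from the four respondent groups; a one-way ANOVA tests for any between-group difference, and simple unadjusted pairwise t-tests stand in for the LSD post hoc step.

```python
from itertools import combinations
from scipy import stats

# Hypothetical mean scores on the input indicators for the four respondent groups.
groups = {
    "heads":     [3.8, 4.0, 3.6, 3.9, 3.7],
    "faculty":   [4.1, 4.3, 3.9, 4.2, 4.0, 4.4],
    "graduates": [3.2, 3.5, 3.1, 3.4, 3.3],
    "students":  [3.3, 3.1, 3.6, 3.2, 3.4, 3.0],
}

F, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={F:.2f}, p={p:.4f}")

# Pairwise t-tests, in the spirit of Fisher's LSD (which only follows a significant ANOVA).
for a, b in combinations(groups, 2):
    t, pt = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t={t:.2f}, p={pt:.4f}")
```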

  14. Integrating NASA Dryden Research Endeavors into the Teaching-Learning of Mathematics in the K-12 Classroom via the WWW

    NASA Technical Reports Server (NTRS)

    Ward, Robin A.

    2002-01-01

    The primary goal of this project was to continue populating the currently existing web site developed in 1998 in conjunction with the NASA Dryden Flight Research Center and California Polytechnic State University, with more mathematics lesson plans and activities that K-12 teachers, students, home-schoolers, and parents could access. All of the activities, while demonstrating some mathematical topic, also showcase the research endeavors of the NASA Dryden Flight Research Center. The website is located at: http://daniel.calpoly.edu/dfrc/Robin. The secondary goal of this project was to share the web-based activities with educators at various conferences and workshops. To address the primary goal of this project, over the past year, several new activities were posted on the web site and some of the existing activities were enhanced to contain more video clips, photos, and materials for teachers. To address the project's secondary goal, the web-based activities were showcased at several conferences and workshops. Additionally, in order to measure and assess the outreach impact of the web site, a link to the web site hitbox.com was established in April 2001, which allowed for the collection of traffic statistics against the web site (such as the domains of visitors, the frequency of visitors to this web site, etc.) Provided is a description of some of the newly created activities posted on the web site during the project period of 2001-2002, followed by a description of the conferences and workshops at which some of the web-based activities were showcased. Next is a brief summary of the web site's traffic statistics demonstrating its worldwide educational impact, followed by a listing of some of the awards and accolades the web site has received.

  15. The classification of PM10 concentrations in Johor Based on Seasonal Monsoons

    NASA Astrophysics Data System (ADS)

    Hamid, Hazrul Abdul; Hanafi Rahmat, Muhamad; Aisyah Sapani, Siti

    2018-04-01

    Air is the most important resource for life. Contaminated air can adversely affect human health and the environment, especially during the monsoon season. Contamination occurs as a result of human activity and haze. Several pollutants are present in the air, one of which is PM10. Secondary data were obtained from the Department of Environment from 2010 until 2014 and were analyzed using the hourly average of PM10 concentrations. This paper examined the relation between PM10 concentrations and the monsoon seasons (Northeast Monsoon and Southwest Monsoon) in Larkin and Pasir Gudang. It was expected that the concentration of PM10 would be higher during the Southwest Monsoon as it is a dry season. The data revealed that the highest PM10 concentrations recorded between 2010 and 2014 occurred during this particular monsoon season. The characteristics of PM10 concentration were compared using descriptive statistics based on the monsoon seasons and classified using hierarchical cluster analysis (Ward's method). The annual average of PM10 concentration during the Southwest Monsoon exceeded the standard set by the Malaysia Ambient Air Quality Guidelines (50 μg/m3), while the PM10 concentration during the Northeast Monsoon was below the acceptable level at both stations. The dendrogram showed two clusters for each monsoon season at both stations, except for the PM10 concentration during the Northeast Monsoon in Larkin, which was classified into three clusters due to the haze in 2010. Overall, the concentration of PM10 in 2013 was higher based on the clustering shown for every monsoon season at both stations, according to the characteristics in the descriptive statistics.
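    A minimal sketch of the classification step, assuming hypothetical yearly PM10 summaries for one station and one monsoon season; descriptive statistics are computed and the years are then clustered hierarchically with Ward's method, mirroring the analysis described above (the values are invented, not the Department of Environment data).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical yearly [mean, std] of PM10 (ug/m3) for one station and one monsoon season.
years = [2010, 2011, 2012, 2013, 2014]
features = np.array([
    [62.0, 18.0],   # 2010 (haze year: elevated mean and spread)
    [45.0, 12.0],
    [47.0, 13.0],
    [58.0, 16.0],
    [44.0, 11.0],
])

print("overall mean PM10:", features[:, 0].mean().round(1))

# Ward's method on the yearly summaries; cut the tree into two clusters.
Z = linkage(features, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for y, lab in zip(years, labels):
    print(y, "-> cluster", lab)
```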

  16. Thermodynamics-based models of transcriptional regulation with gene sequence.

    PubMed

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, which is obtained from statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological insight.

  17. Classification software technique assessment

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Atkinson, R.; Dasarathy, B. V.; Lybanon, M.; Ramapryian, H. K.

    1976-01-01

    A catalog of software options is presented for the use of local user communities to obtain software for analyzing remotely sensed multispectral imagery. The resources required to utilize a particular software program are described. Descriptions of how a particular program analyzes data and the performance of that program for an application and data set provided by the user are shown. An effort is made to establish a statistical performance base for various software programs with regard to different data sets and analysis applications, to determine the status of the state-of-the-art.

  18. LACIE performance predictor final operational capability program description, volume 3

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
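    A stripped-down sketch of the Monte Carlo idea described above, with invented error distributions: each trial perturbs segment-level area and yield estimates, aggregates production, and the spread of the trials gives a statistical estimate of the aggregation error. The actual LEM processing logic and data-file interfaces are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented "true" wheat area (kha) and yield (t/ha) for a handful of sample segments.
area_true = np.array([120.0, 95.0, 140.0, 80.0])
yield_true = np.array([2.1, 1.8, 2.4, 2.0])

n_trials = 5000
productions = np.empty(n_trials)
for i in range(n_trials):
    # Hypothetical input error distributions: multiplicative classification error on area,
    # additive error on the strata yield estimates.
    area = area_true * rng.normal(1.0, 0.08, size=area_true.size)
    yld = yield_true + rng.normal(0.0, 0.15, size=yield_true.size)
    productions[i] = np.sum(area * yld)

print(f"production estimate: {productions.mean():.0f} kt "
      f"(std {productions.std():.0f}, 90% interval "
      f"{np.percentile(productions, 5):.0f}-{np.percentile(productions, 95):.0f})")
```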

  19. Peer Review in Radiology: A Resident and Fellow Perspective.

    PubMed

    Grenville, Jeffrey; Doucette-Preville, David; Vlachou, Paraskevi A; Mnatzakanian, Gevork N; Raikhlin, Antony; Colak, Errol

    2016-02-01

    The purpose of this study was to explore Canadian radiology residents' and fellows' understanding, attitudes, opinions, and preferences toward peer review. An Internet-based anonymous questionnaire designed to understand one's familiarity, attitudes, opinions, and preferences toward peer review was distributed to radiology residents and fellows across Canada. Data were analyzed using descriptive statistics, and answers were stratified by level of training. A total of 136 trainees responded to the survey with 92 completed survey responses available for descriptive statistics. Approximately half of respondents are familiar with peer review (49%), and 39% of trainees are involved in peer review. Most respondents (92%) expressed an interest in learning more about peer review; believe that it should be incorporated into the residency training curriculum (86%), be mandatory (72%), and that current participation will increase odds of future participation (91%). Most trainees (80%) are comfortable advising one another about errors, but less comfortable advising staff (21%). Residents and fellows welcome the opportunity to learn more about peer review and believe it should be incorporated into the residency training curriculum. Understanding the attitudes and perceptions held by trainees regarding peer review is important, as a means to optimize education and maximize current and future participation in peer review. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  20. Psychometric properties of the Danish student well-being questionnaire assessed in >250,000 student responders.

    PubMed

    Niclasen, Janni; Keilow, Maria; Obel, Carsten

    2018-05-01

    Well-being is considered a prerequisite for learning. The Danish Ministry of Education initiated the development of a new 40-item student well-being questionnaire in 2014 to monitor well-being among all Danish public school students on a yearly basis. The aim of this study was to investigate the basic psychometric properties of this questionnaire. We used the data from the 2015 Danish student well-being survey for 268,357 students in grades 4-9 (about 85% of the study population). Descriptive statistics, exploratory factor analyses, confirmatory factor analyses and Cronbach's α reliability measures were used in the analyses. The factor analyses did not unambiguously support one particular factor structure. However, based on the basic descriptive statistics, exploratory factor analyses, confirmatory factor analyses, the semantics of the individual items and Cronbach's α, we propose a four-factor structure including 27 of the 40 items originally proposed. The four scales measure school connectedness, learning self-efficacy, learning environment and classroom management. Two bullying items and two psychosomatic items should be considered separately, leaving 31 items in the questionnaire. The proposed four-factor structure addresses central aspects of well-being, which, if used constructively, may support public schools' work to increase levels of student well-being.
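    A small sketch of one of the reliability measures used above: Cronbach's alpha computed from an item-by-respondent score matrix. The toy data below are invented; the actual 27-item, four-factor structure is not reproduced.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented responses (1-5 Likert) of 6 students to a 4-item scale.
scores = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(scores), 3))
```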

  1. Resolving metal-molecule interfaces at single-molecule junctions

    NASA Astrophysics Data System (ADS)

    Komoto, Yuki; Fujii, Shintaro; Nakamura, Hisao; Tada, Tomofumi; Nishino, Tomoaki; Kiguchi, Manabu

    2016-05-01

    Electronic and structural details at the electrode-molecule interface have a significant influence on charge transport across molecular junctions. Despite the decisive role of the metal-molecule interface, a complete electronic and structural characterization of the interface remains a challenge, in no small part due to current experimental limitations. Here, we present a comprehensive approach to obtain a detailed description of the metal-molecule interface in single-molecule junctions, based on current-voltage (I-V) measurements. Contrary to conventional conductance studies, this I-V approach provides a correlated statistical description of both the degree of electronic coupling across the metal-molecule interface and the energy alignment between the conduction orbital and the Fermi level of the electrode. This exhaustive statistical approach was employed to study single-molecule junctions of 1,4-benzenediamine (BDA), 1,4-butanediamine (C4DA), and 1,4-benzenedithiol (BDT). A single interfacial configuration was observed for both BDA and C4DA junctions, while three different interfacial arrangements were resolved for BDT. This multiplicity is due to different molecular adsorption sites on the Au surface, namely on-top, hollow, and bridge. Furthermore, C4DA junctions present a fluctuating I-V curve arising from the greater conformational freedom of the saturated alkyl chain, in sharp contrast with the rigid aromatic backbone of both BDA and BDT.

  2. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
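    The quantities the paper computes in Excel can be mirrored in a few lines of Python; here is a sketch under the usual assumptions (standard error of the mean = s/√n, and a t-based 95% confidence interval for the mean), using invented data.

```python
import math
from scipy import stats

data = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]   # invented sample

n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))   # sample SD (Excel: STDEV.S)
se = sd / math.sqrt(n)                                          # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                           # two-sided 95% critical value
ci = (mean - t_crit * se, mean + t_crit * se)

print(f"mean={mean:.3f}, SE={se:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```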

  3. Education Statistics Quarterly, Fall 2002.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2003-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  4. BLS Machine-Readable Data and Tabulating Routines.

    ERIC Educational Resources Information Center

    DiFillipo, Tony

    This report describes the machine-readable data and tabulating routines that the Bureau of Labor Statistics (BLS) is prepared to distribute. An introduction discusses the LABSTAT (Labor Statistics) database and the BLS policy on release of unpublished data. Descriptions summarizing data stored in 25 files follow this format: overview, data…

  5. Education Statistics Quarterly, Fall 2001.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2001-01-01

    The publication gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message from…

  6. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  7. Examples of Data Analysis with SPSS-X.

    ERIC Educational Resources Information Center

    MacFarland, Thomas W.

    Intended for classroom use only, these unpublished notes contain computer lessons on descriptive statistics using SPSS-X Release 3.0 for VAX/UNIX. Statistical measures covered include Chi-square analysis; Spearman's rank correlation coefficient; Student's t-test with two independent samples; Student's t-test with a paired sample; One-way analysis…

  8. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    ERIC Educational Resources Information Center

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  9. Education Statistics Quarterly. Volume 5, Issue 1.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2003-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data product, and funding opportunities developed over a 3-month period. Each issue also contains a message…

  10. Education Statistics Quarterly, Winter 2001.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  11. 76 FR 60817 - Notice of Proposed Information Collection Requests

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-30

    ... Statistics (NCES) is seeking a three-year clearance for a new survey data collection for the College... most recent data are available. The clearance being requested is to survey the institutions on this... and sector specific findings from the CATE using descriptive statistics. The main cost areas showing...

  12. Basic Statistical Concepts and Methods for Earth Scientists

    USGS Publications Warehouse

    Olea, Ricardo A.

    2008-01-01

    INTRODUCTION Statistics is the science of collecting, analyzing, interpreting, modeling, and displaying masses of numerical data primarily for the characterization and understanding of incompletely known systems. Over the years, these objectives have led to a fair amount of analytical work to achieve, substantiate, and guide descriptions and inferences.

  13. North Carolina Migrant Education Program. 1971 Project Evaluation Reports, Vol. I.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh.

    Evaluation reports for 10 of the 23 1971 Summer Migrant Projects in North Carolina are presented in Volume I of this compilation. Each report contains the following information: (1) descriptive statistics and results of student achievement; (2) description of the project as obtained from site team reports and other available information; and (3)…

  14. Policymakers Dependence on Evidence in Education Decision Making in Oyo State Ministry of Education

    ERIC Educational Resources Information Center

    Babalola, Joel B.; Gbolahan, Sowunmi

    2016-01-01

    This study investigated policymaker dependence on evidence in education decision making in Oyo State Ministry of Education. The study was conducted under a descriptive survey design, 44 out of the 290 policymakers of the Ministry and Board of Education across the State were purposively selected for the study. Descriptive statistics of frequency…

  15. Evidence of codon usage in the nearest neighbor spacing distribution of bases in bacterial genomes

    NASA Astrophysics Data System (ADS)

    Higareda, M. F.; Geiger, O.; Mendoza, L.; Méndez-Sánchez, R. A.

    2012-02-01

    Statistical analysis of whole genomic sequences usually assumes a homogeneous nucleotide density throughout the genome, an assumption that has been proved incorrect for several organisms since the nucleotide density is only locally homogeneous. To avoid giving a single numerical value to this variable property, we propose the use of spectral statistics, which characterizes the density of nucleotides as a function of position in the genome. We show that the cumulative density of bases in bacterial genomes can be separated into an average (or secular) part plus a fluctuating part. Bacterial genomes can be divided into two groups according to the qualitative description of their secular part: linear and piecewise linear. These two groups of genomes show different properties when their nucleotide spacing distribution is studied. In order to analyze genomes with a variable nucleotide density statistically, unfolding is necessary, i.e., a separation of the secular part from the fluctuations. The unfolding allows an adequate comparison with the statistical properties of other genomes. With this methodology, four genomes were analyzed: Burkholderia, Bacillus, Clostridium and Corynebacterium. Interestingly, the nearest neighbor spacing distributions or detrended distance distributions are very similar for species within the same genus but very different for species from different genera. This difference can be attributed to the difference in codon usage.

  16. Application of pedagogy reflective in statistical methods course and practicum statistical methods

    NASA Astrophysics Data System (ADS)

    Julie, Hongki

    2017-08-01

    The subjects Elementary Statistics, Statistical Methods and Statistical Methods Practicum aimed to equip Mathematics Education students with descriptive and inferential statistics. Understanding descriptive and inferential statistics is important for students of the Mathematics Education Department, especially those whose final project involves quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from the data, and to establish the relationships between the independent and dependent variables defined in their research. In fact, when students completed final projects involving quantitative research, it was not rare to find them making mistakes in drawing conclusions and in choosing the hypothesis-testing procedure, and as a result they reached incorrect conclusions, a serious error for anyone doing quantitative research. The implementation of reflective pedagogy in the teaching-learning process of the Statistical Methods and Statistical Methods Practicum courses yielded the following outcomes: 1. Twenty-two students passed the course and one student did not pass. 2. The highest grade, A, was achieved by 18 students. 3. According to all students, they could develop their critical stance and build care for one another through the learning process in this course. 4. All students agreed that through the learning process they underwent in the course, they could build care for one another.

  17. Designing an Error Resolution Checklist for a Shared Manned-Unmanned Environment

    DTIC Science & Technology

    2010-06-01


  18. Deciphering the complex: methodological overview of statistical models to derive OMICS-based biomarkers.

    PubMed

    Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H

    2013-08-01

    Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
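    As one concrete instance of the univariate-plus-multiple-testing-correction strategy mentioned above, the sketch below runs feature-wise tests and applies the Benjamini-Hochberg false discovery rate procedure; the data matrix and effect sizes are synthetic and purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_samples, n_features = 60, 200
X = rng.normal(size=(n_samples, n_features))       # synthetic OMICS feature matrix
y = rng.integers(0, 2, size=n_samples)             # binary exposure/outcome
X[y == 1, :5] += 1.0                               # only the first 5 features carry signal

# Univariate test per feature (two-sample t-test), then BH-FDR correction.
pvals = np.array([stats.ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(n_features)])

def benjamini_hochberg(p, q=0.05):
    """Return a boolean mask of features rejected at FDR level q."""
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    passed = ranked <= q
    keep = np.zeros(len(p), dtype=bool)
    if passed.any():
        keep[order[: passed.nonzero()[0].max() + 1]] = True
    return keep

selected = benjamini_hochberg(pvals)
print("features passing FDR 5%:", np.flatnonzero(selected))
```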

  19. Students' perceptions of clinical teaching and learning strategies: a Pakistani perspective.

    PubMed

    Khan, Basnama Ayaz; Ali, Fauziya; Vazir, Nilofar; Barolia, Rubina; Rehan, Seema

    2012-01-01

    The complexity of the health care environment is increasing with the explosion of technology, coupled with the issues of patients' access, equity, time efficiency, and cost containment. Nursing education must focus on means that enable students to develop the processes of active learning, problem-solving, and critical thinking, in order to enable them to deal with the complexities. This study aims at identifying the nursing students' perceptions about the effectiveness of utilized teaching and learning strategies of clinical education, in improving students' knowledge, skills, and attitudes. A descriptive cross sectional study design was utilized using both qualitative and quantitative approaches. Data were collected from 74 students, using a questionnaire that was developed for the purpose of the study and analyzed using descriptive and non-parametric statistics. The findings revealed that demonstration was the most effective strategy for improving students' skills; reflection, for improving attitudes; and problem based learning and concept map for improving their knowledge. Students' responses to open-ended questions confirmed the effectiveness of these strategies in improving their learning outcomes. Recommendations have been provided based on the findings. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. A descriptive study of baccalaureate nursing students' responses to suicide prevention education.

    PubMed

    Pullen, Julie M; Gilje, Fredricka; Tesar, Emily

    2016-01-01

    Internationally, little is known regarding the amount of educational content on suicide in undergraduate nursing curricula. The literature review conducted found few published research studies on the implementation of suicide prevention instruction in baccalaureate nursing curricula, even though various international healthcare and nursing initiatives address suicide prevention. The aim was to describe senior baccalaureate students' responses to an evidence-based suicide prevention gatekeeper training program entitled Question-Persuade-Refer implemented in a required course. This is a multi-method descriptive study. Data were collected utilizing a pre-post survey questionnaire administered to 150 students in four classes of a psychiatric nursing course over a two-year period. The quantitative data were statistically significant (p < 0.001), indicating an overall positive rating of the training. From the qualitative data, the main theme was 'becoming capable intervening with persons at risk for suicide'. Students responded very positively to the evidence-based suicide prevention gatekeeper training program. The instruction addresses various national initiatives and strategies, filling a void in nursing curricula, as well as empowering students to engage in suicide prevention interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Transition from Poissonian to Gaussian-orthogonal-ensemble level statistics in a modified Artin's billiard

    NASA Astrophysics Data System (ADS)

    Csordás, A.; Graham, R.; Szépfalusy, P.; Vattay, G.

    1994-01-01

    One wall of an Artin's billiard on the Poincaré half-plane is replaced by a one-parameter (cp) family of nongeodetic walls. A brief description of the classical phase space of this system is given. In the quantum domain, the continuous and gradual transition from the Poisson-like to Gaussian-orthogonal-ensemble (GOE) level statistics due to the small perturbations breaking the symmetry responsible for the ``arithmetic chaos'' at cp=1 is studied. Another GOE-->Poisson transition due to the mixed phase space for large perturbations is also investigated. A satisfactory description of the intermediate level statistics by the Brody distribution was found in both cases. The study supports the existence of a scaling region around cp=1. A finite-size scaling relation for the Brody parameter as a function of 1-cp and the number of levels considered can be established.

  2. Do non-communicable diseases such as hypertension and diabetes associate with primary open-angle glaucoma? Insights from a case-control study in Nepal.

    PubMed

    Shakya-Vaidya, Suraj; Aryal, Umesh Raj; Upadhyay, Madan; Krettek, Alexandra

    2013-11-04

    Non-communicable diseases (NCDs) such as hypertension and diabetes are rapidly emerging public health problems worldwide, and they associate with primary open-angle glaucoma (POAG). POAG is the most common cause of irreversible blindness. The most effective ways to prevent glaucoma blindness involve identifying high-risk populations and conducting routine screening for early case detection. This study investigated whether POAG associates with hypertension and diabetes in a Nepalese population. To explore the history of systemic illness, our hospital-based case-control study used non-random consecutive sampling in the general eye clinics in three hospitals across Nepal to enroll patients newly diagnosed with POAG and controls without POAG. The study protocol included history taking, ocular examination, and interviews with 173 POAG cases and 510 controls. Data analysis comprised descriptive and inferential statistics. Descriptive statistics computed the percentage, mean, and standard deviation (SD); inferential statistics used McNemar's test to measure associations between diseases. POAG affected males more frequently than females. The odds of members of the Gurung ethnic group having POAG were 2.05 times higher than for other ethnic groups. Hypertension and diabetes were strongly associated with POAG. The overall odds of POAG increased 2.72-fold among hypertensive and 3.50-fold among diabetic patients. POAG associates significantly with hypertension and diabetes in Nepal. Thus, periodic glaucoma screening for hypertension and diabetes patients in addition to opportunistic screening at eye clinics may aid in detecting more POAG cases at an early stage and hence in reducing avoidable blindness.
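    A short sketch of the effect measure reported above: the odds ratio and its Wald-type 95% confidence interval from a 2x2 exposure-by-disease table. The counts below are invented for illustration and are not the study's data; the matched analysis via McNemar's test described in the abstract is not reproduced here.

```python
import math
from scipy import stats

# Invented 2x2 table: rows = hypertension yes/no, columns = POAG case/control.
a, b = 70, 120    # exposed cases, exposed controls
c, d = 103, 390   # unexposed cases, unexposed controls

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = stats.norm.ppf(0.975)
ci = (math.exp(math.log(or_) - z * se_log_or),
      math.exp(math.log(or_) + z * se_log_or))
print(f"OR={or_:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```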

  3. The Sport Students’ Ability of Literacy and Statistical Reasoning

    NASA Astrophysics Data System (ADS)

    Hidayah, N.

    2017-03-01

    The ability of statistical literacy and reasoning is very important for sport education college students, because the materials for statistical learning can be drawn from their many activities, such as sport competitions, test and measurement results, predicting achievement based on training, and finding connections among variables. This research describes the sport education college students' statistical literacy and reasoning, assessed with an instrument covering identification of data types, probability, table interpretation, description and explanation using bar or pie charts, explanation of variability, and the calculation and explanation of mean, median, and mode. The instrument was administered to 50 college students majoring in sport; only 26% of the students scored above 30%, while the rest scored below 30%. Across all subjects, 56% of students could identify and classify data, 49% could read, display and interpret tables through graphics, 27% could handle probability, 33% could describe variability, and 16.32% could read, calculate and describe mean, median and mode. The results show that the sport students' statistical literacy and reasoning are not yet adequate and that their statistical learning has not reached conceptual understanding, literacy training and statistical reasoning, so it is critical to improve the sport students' ability of statistical literacy and reasoning.

  4. Equilibrium and diffusion studies of metal-hydrogen systems

    NASA Astrophysics Data System (ADS)

    Maroevic, Petar

    Several new methods and models have been developed pertaining to the equilibrium properties of hydrogen in random binary substitutional alloys at room and lower temperatures, describing both the statistics and the kinetics of hydrogen in them. They represent a solution to the problem of the complete Fermi-Dirac description, which is physically appropriate for these systems. Hydrogen diffusion, which proceeds via lattice-assisted quantum tunneling at room and lower temperatures, requires a new description, different from the one based on the thermal hopping picture, which pertains only to relatively high temperatures. It is also shown that the analogs of the solution to the Fermi-Dirac problem for hydrogen can be successfully applied to the description of vacancies in a hydrogenated system, a phenomenon known to occur due to high hydrogen-vacancy binding energies and the creation of hydrogen-vacancy clusters. The solution based on this model applies to much lower temperatures and higher concentrations than the traditional one. This methodology has also been applied to the surface problem, where very large vacancy and hydrogen concentrations occur. This is of special importance since mechanical properties are known to be greatly affected by the surface. As another consequence of hydrogen-induced vacancies, hydrogen-induced lattice migration (HILM) occurs. This has been demonstrated in our electrical resistivity study of palladium wires, where recrystallization and annealing effects were observed upon hydrogen heat treatment (HHT).

  5. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In the one-dimensional feature subspace, our method achieved mean prediction accuracies of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm for the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable for facilitating the management of respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
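    A bare-bones sketch of the nonlinear reduce-then-recover idea: kernel PCA maps hypothetical high-dimensional surface-motion samples to a low-dimensional feature space, and scikit-learn's built-in pre-image approximation recovers states from that space. The prediction step and the specific fixed-point pre-image method of the abstract are not reproduced; the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
# Hypothetical high-dimensional respiratory states: 300 time samples of a
# 500-dimensional surface representation driven by a 1-D breathing phase.
phase = np.linspace(0, 6 * np.pi, 300)
basis = rng.normal(size=(2, 500))
states = np.outer(np.sin(phase), basis[0]) + np.outer(np.sin(2 * phase), basis[1])
states += rng.normal(0, 0.05, size=states.shape)

# Nonlinear dimension reduction to a 1-D feature submanifold, with pre-image recovery.
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
z = kpca.fit_transform(states)          # low-dimensional features (prediction would act on z)
recovered = kpca.inverse_transform(z)   # approximate pre-images back in the original state space

rmse = np.sqrt(np.mean((recovered - states) ** 2))
print(f"reconstruction RMSE: {rmse:.3f}")
```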

  6. Designing an 'expert knowledge' based approach for the quantification of historical floods - the case study of the Kinzig catchment in Southwest Germany

    NASA Astrophysics Data System (ADS)

    Bösmeier, Annette; Glaser, Rüdiger; Stahl, Kerstin; Himmelsbach, Iso; Schönbein, Johannes

    2017-04-01

    Future estimations of flood hazard and risk for developing optimal coping and adaptation strategies inevitably include considerations of the frequency and magnitude of past events. Methods of historical climatology represent one way of assessing flood occurrences beyond the period of instrumental measurements and can thereby substantially help to extend the view into the past and to improve modern risk analysis. Such historical information can be of additional value and has been used in statistical approaches like Bayesian flood frequency analyses during recent years. However, the derivation of quantitative values from vague descriptive information of historical sources remains a crucial challenge. We explored possibilities of parametrization of descriptive flood related data specifically for the assessment of historical floods in a framework that combines a hermeneutical approach with mathematical and statistical methods. This study forms part of the transnational, Franco-German research project TRANSRISK2 (2014 - 2017), funded by ANR and DFG, with the focus on exploring the flood history of the last 300 years for the regions of the Upper and Middle Rhine. A broad data base of flood events had been compiled, dating back to AD 1500. The events had been classified based on hermeneutical methods, depending on intensity, spatial dimension, temporal structure, damages and mitigation measures associated with the specific events. This indexed database allowed the exploration of a link between descriptive data and quantitative information for the overlapping time period of classified floods and instrumental measurements since the end of the 19th century. Thereby, flood peak discharges as a quantitative measure of the severity of a flood were used to assess the discharge intervals for flood classes (upper and lower thresholds) within different time intervals for validating the flood classification, as well as for examining the trend in the perception threshold over time. Furthermore, within a suitable time period, flood classes and other quantifiable indicators of flood intensity (number of damaged locations mentioned in historical sources, general availability of reports associated with a specific event) were combined with available peak discharge measurements. We argue that this information can be considered 'expert knowledge', and we used it to develop a fuzzy rule based model for deriving peak discharge estimates of pre-instrumental events that can finally be introduced into a flood frequency analysis.

  7. Group contribution methodology based on the statistical associating fluid theory for heteronuclear molecules formed from Mie segments.

    PubMed

    Papaioannou, Vasileios; Lafitte, Thomas; Avendaño, Carlos; Adjiman, Claire S; Jackson, George; Müller, Erich A; Galindo, Amparo

    2014-02-07

    A generalization of the recent version of the statistical associating fluid theory for variable range Mie potentials [Lafitte et al., J. Chem. Phys. 139, 154504 (2013)] is formulated within the framework of a group contribution approach (SAFT-γ Mie). Molecules are represented as comprising distinct functional (chemical) groups based on a fused heteronuclear molecular model, where the interactions between segments are described with the Mie (generalized Lennard-Jonesium) potential of variable attractive and repulsive range. A key feature of the new theory is the accurate description of the monomeric group-group interactions by application of a high-temperature perturbation expansion up to third order. The capabilities of the SAFT-γ Mie approach are exemplified by studying the thermodynamic properties of two chemical families, the n-alkanes and the n-alkyl esters, by developing parameters for the methyl, methylene, and carboxylate functional groups (CH3, CH2, and COO). The approach is shown to describe accurately the fluid-phase behavior of the compounds considered with absolute average deviations of 1.20% and 0.42% for the vapor pressure and saturated liquid density, respectively, which represents a clear improvement over other existing SAFT-based group contribution approaches. The use of Mie potentials to describe the group-group interaction is shown to allow accurate simultaneous descriptions of the fluid-phase behavior and second-order thermodynamic derivative properties of the pure fluids based on a single set of group parameters. Furthermore, the application of the perturbation expansion to third order for the description of the reference monomeric fluid improves the predictions of the theory for the fluid-phase behavior of pure components in the near-critical region. The predictive capabilities of the approach stem from its formulation within a group-contribution formalism: predictions of the fluid-phase behavior and thermodynamic derivative properties of compounds not included in the development of group parameters are demonstrated. The performance of the theory is also critically assessed with predictions of the fluid-phase behavior (vapor-liquid and liquid-liquid equilibria) and excess thermodynamic properties of a variety of binary mixtures, including polymer solutions, where very good agreement with the experimental data is seen, without the need for adjustable mixture parameters.
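    For readers unfamiliar with the Mie potential underlying the segment-segment interactions above, its standard form is u(r) = C ε[(σ/r)^λr − (σ/r)^λa] with C = λr/(λr−λa) · (λr/λa)^(λa/(λr−λa)); the sketch below evaluates it with illustrative parameters, not the published SAFT-γ Mie group parameters.

```python
import numpy as np

def mie_potential(r, eps, sigma, lam_r, lam_a):
    """Mie (generalized Lennard-Jones) pair potential with repulsive/attractive exponents lam_r, lam_a."""
    c = (lam_r / (lam_r - lam_a)) * (lam_r / lam_a) ** (lam_a / (lam_r - lam_a))
    return c * eps * ((sigma / r) ** lam_r - (sigma / r) ** lam_a)

# Illustrative parameters; lam_r=12, lam_a=6 recovers the familiar Lennard-Jones form (C=4).
r = np.linspace(0.9, 3.0, 5)        # separations in units of sigma
print(np.round(mie_potential(r, eps=1.0, sigma=1.0, lam_r=12, lam_a=6), 4))
```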

  8. Validation of satellite-based rainfall in Kalahari

    NASA Astrophysics Data System (ADS)

    Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter

    2018-06-01

    Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB), over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better results of descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable indicators of SRE performance were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicated the SREs' sensitivity and bias of rainfall detection, respectively. The Time-Varying Space-Fixed (TVSF) bias-correction scheme improved some error measures but resulted in a reduction of the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs. This study highlighted SREs as a valuable source of daily rainfall data providing good spatio-temporal coverage, especially suitable for areas with limited rain gauges, such as the CKB, but also emphasized the SREs' drawbacks, creating an avenue for follow-up research.
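    A small sketch of the categorical statistics used in this kind of SRE validation, assuming hypothetical daily gauge vs. satellite rain/no-rain flags; it tabulates hits, misses and false alarms and derives the usual probability of detection (POD), false alarm ratio (FAR) and critical success index (CSI).

```python
# 1 = rain day, 0 = dry day, at one gauge location (hypothetical 14-day record).
gauge     = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
satellite = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

hits        = sum(1 for g, s in zip(gauge, satellite) if g == 1 and s == 1)
misses      = sum(1 for g, s in zip(gauge, satellite) if g == 1 and s == 0)  # "miss" rainfall events
false_alarm = sum(1 for g, s in zip(gauge, satellite) if g == 0 and s == 1)

pod = hits / (hits + misses)                 # probability of detection
far = false_alarm / (hits + false_alarm)     # false alarm ratio
csi = hits / (hits + misses + false_alarm)   # critical success index
print(f"POD={pod:.2f}, FAR={far:.2f}, CSI={csi:.2f}")
```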

  9. Clinical competence of Guatemalan and Mexican physicians for family dysfunction management.

    PubMed

    Cabrera-Pivaral, Carlos Enrique; Orozco-Valerio, María de Jesús; Celis-de la Rosa, Alfredo; Covarrubias-Bermúdez, María de Los Ángeles; Zavala-González, Marco Antonio

    2017-01-01

    To evaluate the clinical competence of Mexican and Guatemalan physicians in managing family dysfunction. Cross-sectional comparative study in four first-level care units in Guadalajara, Mexico, and four in Guatemala, Guatemala, based on purposive sampling, involving 117 and 100 physicians, respectively. Clinical competence was evaluated with a validated instrument comprising 187 items. Non-parametric descriptive and inferential statistical analyses were performed. The percentage of Mexican physicians with high clinical competence was 13.7%, medium 53%, low 24.8%, and explained by chance 8.5%. For the Guatemalan physicians, 14% was high, 63% medium, and 23% explained by chance. There were no statistically significant differences between the health care units within each country, but there was a difference between the mean scores of the Mexican (0.55) and Guatemalan (0.55) physicians (p = 0.02). The proportion of Mexican physicians with high clinical competence was similar to that of the Guatemalan physicians.

  10. A new equation of state Based on Nuclear Statistical Equilibrium for Core-Collapse Simulations

    NASA Astrophysics Data System (ADS)

    Furusawa, Shun; Yamada, Shoichi; Sumiyoshi, Kohsuke; Suzuki, Hideyuki

    2012-09-01

    We calculate a new equation of state for baryons at sub-nuclear densities for the use in core-collapse simulations of massive stars. The formulation is the nuclear statistical equilibrium description and the liquid drop approximation of nuclei. The model free energy to minimize is calculated by relativistic mean field theory for nucleons and the mass formula for nuclei with atomic number up to ~ 1000. We have also taken into account the pasta phase. We find that the free energy and other thermodynamical quantities are not very different from those given in the standard EOSs that adopt the single nucleus approximation. On the other hand, the average mass is systematically different, which may have an important effect on the rates of electron captures and coherent neutrino scatterings on nuclei in supernova cores.

  11. [Methodology of the description of atmospheric air pollution by nitrogen dioxide by land use regression method in Ekaterinburg].

    PubMed

    Antropov, K M; Varaksin, A N

    2013-01-01

    This paper provides a description of Land Use Regression (LUR) modeling and the results of its application in a study of nitrogen dioxide air pollution in Ekaterinburg. The paper describes the difficulties of modeling air pollution caused by motor vehicle exhaust, and ways to address these challenges. To create the LUR model of NO2 air pollution in Ekaterinburg, NO2 concentrations were measured, data on factors affecting air pollution were collected, and a statistical analysis of the data was performed. A statistical model of NO2 air pollution (coefficient of determination R2 = 0.70) and a map of the pollution were created.
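    A minimal sketch of the land use regression idea follows: NO2 measured at monitoring sites is regressed on GIS-derived predictors, and the fitted model can then be applied across a grid to map pollution. The predictor names and synthetic data are illustrative assumptions, not the variables used in the Ekaterinburg model.

```python
# Minimal land use regression (LUR) sketch, not the Ekaterinburg model itself.
# Predictor names (traffic intensity, road length, industrial share) are illustrative
# assumptions; a real LUR model selects predictors from many GIS-derived candidates.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_sites = 40                                      # monitoring sites where NO2 was measured
X = np.column_stack([
    rng.uniform(0, 5e4, n_sites),                 # traffic intensity within a buffer
    rng.uniform(0, 10, n_sites),                  # major road length within a buffer, km
    rng.uniform(0, 1, n_sites),                   # industrial land-use share
])
no2 = 10 + 4e-4 * X[:, 0] + 1.5 * X[:, 1] + 12 * X[:, 2] + rng.normal(0, 5, n_sites)

model = LinearRegression().fit(X, no2)
print("R^2 =", round(r2_score(no2, model.predict(X)), 2))
# The fitted model can then be applied to GIS predictors on a grid to map NO2.
```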

  12. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, which are characterized on the basis of their disparate time scales. Signal coding, a brief description of frequently used codes, and their limitations are then discussed. Finally, several aspects of statistical data processing are covered, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques.
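    The sketch below illustrates two of the statistical data-processing steps mentioned above, autocovariance and power-spectrum (periodogram) estimation, on a synthetic time series; it is a generic example under assumed parameters, not radar-specific code.

```python
# Minimal sketch of two statistical data-processing steps mentioned above:
# autocovariance and power-spectrum estimation of a sampled time series.
# The synthetic tone-plus-noise signal below is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n, dt = 512, 1e-3                                   # samples and sampling interval (s)
t = np.arange(n) * dt
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.normal(size=n)   # 50 Hz tone + noise

x = signal - signal.mean()
acf = np.correlate(x, x, mode="full")[n - 1:] / n   # biased autocovariance estimate
power = np.abs(np.fft.rfft(x)) ** 2 / n             # periodogram
freqs = np.fft.rfftfreq(n, dt)

print("lag-0 autocovariance (variance):", round(acf[0], 3))
print("dominant frequency (Hz):", freqs[np.argmax(power)])
```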

  13. The influence of socio cultural dynamics on convergence communication of aquaculture agribusiness actors

    NASA Astrophysics Data System (ADS)

    Oktavia, Y.

    2018-03-01

    This research aims to: (1) analyze the level of socio-cultural dynamics of aquaculture agribusiness actors; (2) analyze the influence of socio-cultural dynamics on the convergence communication of capacity development of aquaculture agribusiness actors. Data were collected through questionnaires and interviews with group members in agribusiness. Data analysis was done with descriptive and inferential statistics using the SEM method. The descriptive statistics on 284 agribusiness members showed that the socio-cultural dynamics of aquaculture agribusiness actors were in the low category, as shown by the weak role of customary institutions and the quality of local leadership. The communication convergence is significantly and positively influenced by the communication behavior of agribusiness actors in accessing information.

  14. Using Statistics and Data Mining Approaches to Analyze Male Sexual Behaviors and Use of Erectile Dysfunction Drugs Based on Large Questionnaire Data.

    PubMed

    Qiao, Zhi; Li, Xiang; Liu, Haifeng; Zhang, Lei; Cao, Junyang; Xie, Guotong; Qin, Nan; Jiang, Hui; Lin, Haocheng

    2017-01-01

    The prevalence of erectile dysfunction (ED) has been extensively studied worldwide. Erectile dysfunction drugs have shown great efficacy in treating male erectile dysfunction. To help doctors understand patients' drug-taking preferences and prescribe more appropriately, it is crucial to analyze who actually takes erectile dysfunction drugs and the relation between sexual behaviors and drug use. Existing clinical studies usually used descriptive statistics and regression analysis based on small volumes of data. In this paper, based on a large volume of data (48,630 questionnaires), we use data mining approaches in addition to statistics and regression analysis to comprehensively analyze the relation between male sexual behaviors and the use of erectile dysfunction drugs, in order to unravel the characteristics of patients who take them. We first analyze the impact of multiple sexual behavior factors on whether erectile dysfunction drugs are used. We then mine Decision Rules for Stratification to discover patients who are more likely to take the drugs. Based on the decision rules, patients can be partitioned into four potential groups for use of erectile dysfunction drugs: a high potential group, intermediate potential-1 group, intermediate potential-2 group and low potential group. Experimental results show that 1) the sexual behavior factors of erectile hardness and preparation time (how long patients prepare for sexual activity ahead of time) have larger impacts both in the correlation analysis and in discovering potential drug-taking patients; 2) the odds ratio between patients identified as low potential and high potential was 6.098 (95% confidence interval, 5.159-7.209), with statistically significant differences in drug-taking potential detected between all potential groups.
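    As a small worked example of the odds-ratio statistic reported above, the sketch below computes an odds ratio and its 95% confidence interval from a 2x2 table; the cell counts are illustrative assumptions, not the study's data.

```python
# Minimal sketch of an odds ratio with a 95% confidence interval from a 2x2 table,
# as used above to compare drug use between two patient groups.
# The cell counts below are illustrative assumptions, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: users/non-users in group 1; c,d: users/non-users in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

or_, (lo, hi) = odds_ratio_ci(a=300, b=700, c=80, d=920)
print(f"OR = {or_:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```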

  15. Representativeness and optimal use of body mass index (BMI) in the UK Clinical Practice Research Datalink (CPRD)

    PubMed Central

    Bhaskaran, Krishnan; Forbes, Harriet J; Douglas, Ian; Leon, David A; Smeeth, Liam

    2013-01-01

    Objectives To assess the completeness and representativeness of body mass index (BMI) data in the Clinical Practice Research Datalink (CPRD), and determine an optimal strategy for their use. Design Descriptive study. Setting Electronic healthcare records from primary care. Participants A random sample of one million patients from the UK CPRD primary care database, aged ≥16 years. Primary and secondary outcome measures BMI completeness in CPRD was evaluated by age, sex and calendar period. CPRD-based summary BMI statistics for each calendar year (2003–2010) were age-standardised and sex-standardised and compared with equivalent statistics from the Health Survey for England (HSE). Results BMI completeness increased over calendar time from 37% in 1990–1994 to 77% in 2005–2011, was higher among females and increased with age. When BMI at specific time points was assigned based on the most recent record, calendar-year-specific mean BMI statistics underestimated equivalent HSE statistics by 0.75–1.1 kg/m2. Restriction to those with a recent (≤3 years) BMI resulted in mean BMI estimates closer to HSE (≤0.28 kg/m2 underestimation), but excluded up to 47% of patients. An alternative strategy of imputing up-to-date BMI based on modelled changes in BMI over time since the last available record also led to mean BMI estimates that were close to HSE (≤0.37 kg/m2 underestimation). Conclusions Completeness of BMI in CPRD increased over time and varied by age and sex. At a given point in time, a large proportion of the most recent BMIs are unlikely to reflect current BMI; consequent BMI misclassification might be reduced by employing model-based imputation of current BMI. PMID:24038008
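    The two assignment strategies compared above (most recent BMI record versus restriction to records within the last 3 years) can be expressed in a few lines of table manipulation. The sketch below is a hypothetical illustration; the column names and toy records are assumptions, not CPRD data.

```python
# Minimal pandas sketch of the two strategies compared above: assigning each patient's
# BMI at an index date from the most recent record, with and without a 3-year recency
# restriction. Column names and the toy records are illustrative assumptions.
import pandas as pd

bmi = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "bmi_date": pd.to_datetime(["2004-05-01", "2008-02-01", "2001-07-15", "2009-11-30"]),
    "bmi": [27.1, 28.4, 22.0, 31.5],
})
index_date = pd.Timestamp("2010-01-01")

# Strategy 1: most recent BMI on or before the index date, however old
latest = (bmi[bmi["bmi_date"] <= index_date]
          .sort_values("bmi_date")
          .groupby("patient_id", as_index=False)
          .last())

# Strategy 2: restrict to records within the last 3 years (older values are dropped)
recent = latest[(index_date - latest["bmi_date"]).dt.days <= 3 * 365]

print(latest[["patient_id", "bmi"]])
print(recent[["patient_id", "bmi"]])
```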

  16. Estimation of trabecular bone parameters in children from multisequence MRI using texture-based regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekadir, Karim, E-mail: karim.lekadir@upf.edu; Hoogendoorn, Corné; Armitage, Paul

    Purpose: This paper presents a statistical approach for the prediction of trabecular bone parameters from low-resolution multisequence magnetic resonance imaging (MRI) in children, thus addressing the limitations of high-resolution modalities such as HR-pQCT, including the significant exposure of young patients to radiation and the limited applicability of such modalities to peripheral bones in vivo. Methods: A statistical predictive model is constructed from a database of MRI and HR-pQCT datasets, to relate the low-resolution MRI appearance in the cancellous bone to the trabecular parameters extracted from the high-resolution images. The description of the MRI appearance is achieved between subjects by using a collection of feature descriptors, which describe the texture properties inside the cancellous bone and which are invariant to the geometry and size of the trabecular areas. The predictive model is built by fitting to the training data a nonlinear partial least squares regression between the input MRI features and the output trabecular parameters. Results: Detailed validation based on a sample of 96 datasets shows correlations >0.7 between the trabecular parameters predicted from low-resolution multisequence MRI based on the proposed statistical model and the values extracted from high-resolution HR-pQCT. Conclusions: The obtained results indicate the promise of the proposed predictive technique for the estimation of trabecular parameters in children from multisequence MRI, thus reducing the need for high-resolution radiation-based scans for a fragile population that is under development and growth.

  17. IT Control Deficiencies That Affect the Financial Reporting of Companies since the Enactment of the Sarbanes Oxley Act

    ERIC Educational Resources Information Center

    Harper, Roosevelt

    2014-01-01

    This research study examined the specific categories of IT control deficiencies and their related effects on financial reporting. The approach to this study was considered non-experimental, an approach sometimes called descriptive. Descriptive statistics are used to describe the basic features of the data in a study, providing simple summaries…

  18. Student Success: A Descriptive Analysis of Hispanic Students and Engagement at a Midwest Hispanic-Serving Institution

    ERIC Educational Resources Information Center

    Mercado, Claudia

    2012-01-01

    The purpose of this study was to learn more about the Hispanic students attending Northeastern Illinois University, a four-year institution in Chicago, IL, and their student success. Little is known descriptively and statistically about this population at NEIU, which serves as a Hispanic-Serving Institution. In addition, little is known about…

  19. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…

  20. An Analysis of Research Trends in Dissertations and Theses Studying Blended Learning

    ERIC Educational Resources Information Center

    Drysdale, Jeffery S.; Graham, Charles R.; Spring, Kristian J.; Halverson, Lisa R.

    2013-01-01

    This article analyzes the research of 205 doctoral dissertations and master's theses in the domain of blended learning. A summary of trends regarding the growth and context of blended learning research is presented. Methodological trends are described in terms of qualitative, inferential statistics, descriptive statistics, and combined approaches…

  1. In Search of the Most Likely Value

    ERIC Educational Resources Information Center

    Letkowski, Jerzy

    2014-01-01

    Descriptive statistics provides methodology and tools for the user-friendly presentation of random data. Among the summary measures that describe focal tendencies in random data, the mode is given the least amount of attention and is frequently misinterpreted in many introductory textbooks on statistics. The purpose of the paper is to provide a…

  2. [Artificial neural networks for decision making in urologic oncology].

    PubMed

    Remzi, M; Djavan, B

    2007-06-01

    This chapter presents a detailed introduction to Artificial Neural Networks (ANNs) and their contribution to modern Urologic Oncology. It includes a description of ANN methodology and points out the differences between Artificial Intelligence and traditional statistical models in terms of usefulness for patients and clinicians, as well as its advantages over current statistical analysis.

  3. Financial Statistics, 1980-81. Our Colleges and Universities Today. Volume XIX, Number 8.

    ERIC Educational Resources Information Center

    Hottinger, Gerald W.

    Financial statistics for Pennsylvania colleges and universities for the fiscal year (FY) ending 1981, for 1971-1972 through 1980-1981, and for 1977-1978 through 1980-1981 are presented, along with narrative descriptions of financial trends at the institutions. Information includes the following: current-funds revenues by institutional control;…

  4. Using the Descriptive Bootstrap to Evaluate Result Replicability (Because Statistical Significance Doesn't)

    ERIC Educational Resources Information Center

    Spinella, Sarah

    2011-01-01

    As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…
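    A minimal illustration of the bootstrap idea described above: resample the observed data with replacement many times and examine how the statistic of interest varies across resamples. The sample values and number of resamples below are assumptions for illustration.

```python
# Minimal sketch of the descriptive bootstrap: resampling the observed data with
# replacement to see how stable a statistic (here the mean) is across resamples.
# The sample values and 1,000 resamples are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
sample = np.array([2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5, 4.8, 3.1, 3.7])

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(1000)
])

print("observed mean:", sample.mean())
print("bootstrap 2.5th-97.5th percentiles:",
      np.percentile(boot_means, [2.5, 97.5]).round(2))
```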

  5. Difficulties in Learning and Teaching Statistics: Teacher Views

    ERIC Educational Resources Information Center

    Koparan, Timur

    2015-01-01

    The purpose of this study is to define teacher views about the difficulties in learning and teaching middle school statistics subjects. To serve this aim, a number of interviews were conducted with 10 middle school maths teachers in 2011-2012 school year in the province of Trabzon. Of the qualitative descriptive research methods, the…

  6. Seed Dispersal Near and Far: Patterns Across Temperate and Tropical Forests

    Treesearch

    James S. Clark; Miles Silman; Ruth Kern; Eric Macklin; Janneke HilleRisLambers

    1999-01-01

    Dispersal affects community dynamics and vegetation response to global change. Understanding these effects requires descriptions of dispersal at local and regional scales and statistical models that permit estimation. Classical models of dispersal describe local or long-distance dispersal, but not both. The lack of statistical methods means that models have rarely been...

  7. Damned Lies. And Statistics. Otto Neurath and Soviet Propaganda in the 1930s.

    ERIC Educational Resources Information Center

    Chizlett, Clive

    1992-01-01

    Examines the philosophical and historical context in which Otto Neurath (1882-1945) worked. Examines critically (in the light of descriptive statistics) the principles of his Isotype Picture Language. Tests Neurath's personal credibility and scientific integrity by looking at his contributions to Soviet propaganda in the early 1930s. (SR)

  8. Forest statistics for the upper Koyukuk River, Alaska, 1971.

    Treesearch

    Karl M. Hegg

    1974-01-01

    Area and volume statistics from the first intensive forest inventory of the upper Koyukuk River drainage, in north-central Alaska, are given. Observations are made on forest location, description, defect, regeneration, growth, and mortality. Commercial forests, although generally restricted to a narrow band along drainages, were found as far as 70 miles (113 kilometers...

  9. Statistical description of tectonic motions

    NASA Technical Reports Server (NTRS)

    Agnew, Duncan Carr

    1993-01-01

    This report summarizes investigations regarding tectonic motions. The topics discussed include statistics of crustal deformation, Earth rotation studies, using multitaper spectrum analysis techniques applied to both space-geodetic data and conventional astrometric estimates of the Earth's polar motion, and the development, design, and installation of high-stability geodetic monuments for use with the global positioning system.

  10. Education Statistics Quarterly. Volume 4 Issue 4, 2002.

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2002

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  11. Critical discussion of evaluation parameters for inter-observer variability in target definition for radiation therapy.

    PubMed

    Fotina, I; Lütgendorf-Caucig, C; Stock, M; Pötter, R; Georg, D

    2012-02-01

    Inter-observer studies represent a valid method for the evaluation of target definition uncertainties and contouring guidelines. However, data from the literature do not yet give clear guidelines for reporting contouring variability. Thus, the purpose of this work was to compare and discuss various methods to determine variability on the basis of clinical cases and a literature review. In this study, 7 prostate and 8 lung cases were contoured on CT images by 8 experienced observers. Analysis of variability included descriptive statistics, calculation of overlap measures, and statistical measures of agreement. Cross tables with ratios and correlations were established for overlap parameters. It was shown that the minimal set of parameters to be reported should include at least one of three volume overlap measures (i.e., generalized conformity index, Jaccard coefficient, or conformation number). High correlation between these parameters and scatter of the results was observed. A combination of descriptive statistics, overlap measure, and statistical measure of agreement or reliability analysis is required to fully report the interrater variability in delineation.
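    For concreteness, the sketch below computes three of the volume-overlap measures named above (Dice, Jaccard and a conformation-number-style index) for two binary delineation masks; the toy masks and the particular form of the conformation number are illustrative assumptions.

```python
# Minimal sketch of three volume-overlap measures named above, computed for two
# binary delineation masks of the same structure. The toy masks are illustrative
# assumptions; in practice the masks come from the observers' contours on CT.
import numpy as np

def overlap_measures(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2 * inter / (a.sum() + b.sum())
    jaccard = inter / union
    # one common form of a conformation number for a pair of volumes
    conformation = inter ** 2 / (a.sum() * b.sum())
    return dice, jaccard, conformation

rng = np.random.default_rng(4)
base = np.zeros((50, 50), dtype=bool)
base[10:40, 10:40] = True                        # observer A's structure
obs_a = base.copy()
obs_b = np.roll(base, shift=3, axis=0)           # observer B, slightly shifted
print([round(v, 3) for v in overlap_measures(obs_a, obs_b)])
```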

  12. Descriptive study of perioperative analgesic medications associated with general anesthesia for dental rehabilitation of children.

    PubMed

    Carter, Laura; Wilson, Stephen; Tumer, Erwin G

    2010-01-01

    The purpose of this retrospective chart review was to document sedation and analgesic medications administered preoperatively, intraoperatively, and during postanesthesia care for children undergoing dental rehabilitation using general anesthesia (GA). Patient gender, age, procedure type performed, and ASA status were recorded from the medical charts of children undergoing GA for dental rehabilitation. The sedative and analgesic drugs administered pre-, intra-, and postoperatively were recorded. Statistical analysis included descriptive statistics and cross-tabulation. A sample of 115 patients with a mean age of 64 (+/-30) months was studied; 47% were females, and 71% were healthy. Over 80% of the patients were administered medications primarily during the pre- and intraoperative phases, with fewer than 25% receiving medications postoperatively. Morphine and fentanyl were the most frequently administered agents intraoperatively. Procedure type, gender, and health status were not statistically associated with the number of agents administered. Younger patients, however, were statistically more likely to receive additional analgesic medications. Our study suggests that a minority of patients have postoperative discomfort in the postanesthesia care unit; mild to moderate analgesics were administered during the intraoperative phases of dental rehabilitation.

  13. The Thurgood Marshall School of Law Empirical Findings: A Report of the 2012 Friday Academy Attendance and Statistical Comparisons of 1L GPA (Predicted and Actual)

    ERIC Educational Resources Information Center

    Kadhi, T.; Rudley, D.; Holley, D.; Krishna, K.; Ogolla, C.; Rene, E.; Green, T.

    2010-01-01

    The following report of descriptive statistics addresses the attendance of the 2012 class and the average Actual and Predicted 1L Grade Point Averages (GPAs). Correlational and Inferential statistics are also run on the variables of Attendance (Y/N), Attendance Number of Times, Actual GPA, and Predictive GPA (Predictive GPA is defined as the Index…

  14. An HMM-based algorithm for evaluating rates of receptor–ligand binding kinetics from thermal fluctuation data

    PubMed Central

    Ju, Lining; Wang, Yijie Dylan; Hung, Ying; Wu, Chien-Fu Jeff; Zhu, Cheng

    2013-01-01

    Motivation: Abrupt reduction/resumption of thermal fluctuations of a force probe has been used to identify association/dissociation events of protein–ligand bonds. We show that off-rate of molecular dissociation can be estimated by the analysis of the bond lifetime, while the on-rate of molecular association can be estimated by the analysis of the waiting time between two neighboring bond events. However, the analysis relies heavily on subjective judgments and is time-consuming. To automate the process of mapping out bond events from thermal fluctuation data, we develop a hidden Markov model (HMM)-based method. Results: The HMM method represents the bond state by a hidden variable with two values: bound and unbound. The bond association/dissociation is visualized and pinpointed. We apply the method to analyze a key receptor–ligand interaction in the early stage of hemostasis and thrombosis: the von Willebrand factor (VWF) binding to platelet glycoprotein Ibα (GPIbα). The numbers of bond lifetime and waiting time events estimated by the HMM are much more than those estimated by a descriptive statistical method from the same set of raw data. The kinetic parameters estimated by the HMM are in excellent agreement with those by a descriptive statistical analysis, but have much smaller errors for both wild-type and two mutant VWF-A1 domains. Thus, the computerized analysis allows us to speed up the analysis and improve the quality of estimates of receptor–ligand binding kinetics. Contact: jeffwu@isye.gatech.edu or cheng.zhu@bme.gatech.edu PMID:23599504
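    The sketch below illustrates the core idea of a two-state (bound/unbound) HMM decoded from a fluctuation signal, using Viterbi decoding with Gaussian emissions whose variance drops when a bond forms. All parameter values are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal sketch of the idea described above: a two-state HMM whose hidden variable
# (bound / unbound) is decoded from a thermal-fluctuation signal. The emission in each
# state is modeled as Gaussian with a different standard deviation (bond formation
# suppresses fluctuations); all parameter values are illustrative assumptions.
import numpy as np

def viterbi_two_state(obs, means, stds, trans, start):
    """Most likely state path for a 2-state Gaussian HMM (0=unbound, 1=bound)."""
    def log_gauss(x, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd ** 2) - (x - mu) ** 2 / (2 * sd ** 2)

    n, k = len(obs), 2
    log_delta = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    log_delta[0] = np.log(start) + [log_gauss(obs[0], means[s], stds[s]) for s in range(k)]
    for t in range(1, n):
        for s in range(k):
            cand = log_delta[t - 1] + np.log(trans[:, s])
            back[t, s] = np.argmax(cand)
            log_delta[t, s] = cand[back[t, s]] + log_gauss(obs[t], means[s], stds[s])
    path = np.zeros(n, dtype=int)
    path[-1] = np.argmax(log_delta[-1])
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

rng = np.random.default_rng(5)
true = np.repeat([0, 1, 0, 1, 0], 200)                    # alternating unbound/bound
signal = rng.normal(0, np.where(true == 1, 2.0, 6.0))     # bound -> reduced fluctuation
states = viterbi_two_state(signal,
                           means=np.array([0.0, 0.0]),
                           stds=np.array([6.0, 2.0]),
                           trans=np.array([[0.99, 0.01], [0.01, 0.99]]),
                           start=np.array([0.5, 0.5]))
print("fraction of time decoded as bound:", states.mean())
```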

  15. Variations of attractors and wavelet spectra of the immunofluorescence distributions for women in the pregnant period

    NASA Astrophysics Data System (ADS)

    Galich, Nikolay E.

    2008-07-01

    This communication describes the treatment of immunology data. New nonlinear methods for the statistical analysis of immunofluorescence of peripheral blood neutrophils have been developed. We used the respiratory burst reaction of DNA fluorescence in the nuclei of neutrophil cells due to oxidative activity. Histograms of the photon count statistics of radiant neutrophil populations in flow cytometry experiments are considered. Distributions of the fluorescence flash frequency as functions of the fluorescence intensity are analyzed. Statistical peculiarities of the histogram set for women in the pregnant period allow all histograms to be divided into three classes. The classification is based on three different types of smoothed, long-range scale averaged immunofluorescence distributions, their bifurcation and wavelet spectra. The heterogeneity of the long-range scale immunofluorescence distributions and the features of their wavelet spectra allow all histograms to be divided into three groups. The first group of histograms belongs to healthy donors. The two other groups belong to donors with autoimmune and inflammatory diseases, some of which are not diagnosed by standard biochemical methods. Medical standards and the statistical data of the immunofluorescence histograms for identifying health and illness are interconnected. The peculiarities of immunofluorescence for women in the pregnant period are classified. Health or illness criteria are connected with the statistical features of the immunofluorescence histograms. Neutrophil population fluorescence presents a sensitive, clear indicator of health status.

  16. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.

  17. Generalized statistical mechanics approaches to earthquakes and tectonics

    PubMed Central

    Papadakis, Giorgos; Michas, Georgios

    2016-01-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548

  18. Information on 'Overdiagnosis' in Breast Cancer Screening on Prominent United Kingdom- and Australia-Oriented Health Websites.

    PubMed

    Ghanouni, Alex; Meisel, Susanne F; Hersch, Jolyn; Waller, Jo; Wardle, Jane; Renzi, Cristina

    2016-01-01

    Health-related websites are an important source of information for the public. Increasing public awareness of overdiagnosis and ductal carcinoma in situ (DCIS) in breast cancer screening may facilitate more informed decision-making. This study assessed the extent to which such information was included on prominent health websites oriented towards the general public, and evaluated how it was explained. Cross-sectional study. Websites identified through Google searches in England (United Kingdom) and New South Wales (Australia) for "breast cancer screening" and further websites included based on our prior knowledge of relevant organisations. Content analysis was used to determine whether information on overdiagnosis or DCIS existed on each site, how the concepts were described, and what statistics were used to quantify overdiagnosis. After exclusions, ten UK websites and eight Australian websites were considered relevant and evaluated. They originated from charities, health service providers, government agencies, and an independent health organisation. Most contained some information on overdiagnosis (and/or DCIS). Descriptive information was similar across websites. Among UK websites, statistical information was often based on estimates from the Independent UK Panel on Breast Cancer Screening; the most commonly provided statistic was the ratio of breast cancer deaths prevented to overdiagnosed cases (1:3). A range of other statistics was included, such as the yearly number of overdiagnosed cases and the proportion of women screened who would be overdiagnosed. Information on DCIS and statistical information was less common on the Australian websites. Online information about overdiagnosis has become more widely available in 2015-16 compared with the limited accessibility indicated by older research. However, there may be scope to offer more information on DCIS and overdiagnosis statistics on Australian websites. Moreover, the variability in how estimates are presented across UK websites may be confusing for the general public.

  19. The Norwegian preeclampsia family cohort study: a new resource for investigating genetic aspects and heritability of preeclampsia and related phenotypes.

    PubMed

    Roten, Linda Tømmerdal; Thomsen, Liv Cecilie Vestrheim; Gundersen, Astrid Solberg; Fenstad, Mona Høysæter; Odland, Maria Lisa; Strand, Kristin Melheim; Solberg, Per; Tappert, Christian; Araya, Elisabeth; Bærheim, Gunhild; Lyslo, Ingvill; Tollaksen, Kjersti; Bjørge, Line; Austgulen, Rigmor

    2015-12-01

    Preeclampsia is a major pregnancy complication without curative treatment available. A Norwegian Preeclampsia Family Cohort was established to provide a new resource for genetic and molecular studies aiming to improve the understanding of the complex pathophysiology of preeclampsia. Participants were recruited from five Norwegian hospitals after diagnoses of preeclampsia registered in the Medical Birth Registry of Norway were verified according to the study's inclusion criteria. Detailed obstetric information and information on personal and family disease history, focusing on cardiovascular health, were collected. At attendance, anthropometric measurements were recorded and blood samples were drawn. The software package SPSS 19.0 for Windows was used to compute descriptive statistics such as the mean and SD. P-values were computed based on t-test statistics for normally distributed variables. Non-parametric methods (chi-square) were used for categorical variables. A cohort consisting of 496 participants (355 females and 141 males) representing 137 families with increased occurrence of preeclampsia has been established, and blood samples are available for 477 participants. Descriptive analyses showed that about 60% of the index women's pregnancies with birth data registered were preeclamptic according to modern diagnostic criteria. We also found that about 41% of the index women experienced more than one preeclamptic pregnancy. In addition, the descriptive analyses confirmed that preeclamptic pregnancies are more often accompanied by delivery complications. The data and biological samples collected in this Norwegian Preeclampsia Family Cohort will provide an important basis for future research. Identification of preeclampsia susceptibility genes and new biomarkers may contribute to more efficient strategies to identify mothers "at risk" and contribute to the development of novel preventative therapies.

  20. Main Geomagnetic Field Models from Oersted and Magsat Data Via a Rigorous General Inverse Theory with Error Bounds

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1999-01-01

    The purpose of the grant was to study how prior information about the geomagnetic field can be used to interpret surface and satellite magnetic measurements, to generate quantitative descriptions of prior information that might be so used, and to use this prior information to obtain from satellite data a model of the core field with statistically justifiable error estimates. The need for prior information in geophysical inversion has long been recognized. Data sets are finite, and faithful descriptions of aspects of the earth almost always require infinite-dimensional model spaces. By themselves, the data can confine the correct earth model only to an infinite-dimensional subset of the model space. Earth properties other than direct functions of the observed data cannot be estimated from those data without prior information about the earth. Prior information is based on what the observer already knows before the data become available. Such information can be "hard" or "soft". Hard information is a belief that the real earth must lie in some known region of model space. For example, the total ohmic dissipation in the core is probably less than the total observed geothermal heat flow out of the earth's surface. (In principle, ohmic heat in the core can be recaptured to help drive the dynamo, but this effect is probably small.) "Soft" information is a probability distribution on the model space, a distribution that the observer accepts as a quantitative description of her/his beliefs about the earth. The probability distribution can be a subjective prior in the sense of Bayes or the objective result of a statistical study of previous data or relevant theories.

  1. Estimating chronic disease rates in Canada: which population-wide denominator to use?

    PubMed

    Ellison, J; Nagamuthu, C; Vanderloo, S; McRae, B; Waters, C

    2016-10-01

    Chronic disease rates are produced from the Public Health Agency of Canada's Canadian Chronic Disease Surveillance System (CCDSS) using administrative health data from provincial/territorial health ministries. Denominators for these rates are based on estimates of populations derived from health insurance files. However, these data may not be accessible to all researchers. Another source for population size estimates is the Statistics Canada census. The purpose of our study was to calculate the major differences between the CCDSS and Statistics Canada's population denominators and to identify the sources or reasons for the potential differences between these data sources. We compared the 2009 denominators from the CCDSS and Statistics Canada. The CCDSS denominator was adjusted for the growth components (births, deaths, emigration and immigration) from Statistics Canada's census data. The unadjusted CCDSS denominator was 34 429 804, 3.2% higher than Statistics Canada's estimate of population in 2009. After the CCDSS denominator was adjusted for the growth components, the difference between the two estimates was reduced to 431 323 people, a difference of 1.3%. The CCDSS overestimates the population relative to Statistics Canada overall. The largest difference between the two estimates was from the migrant growth component, while the smallest was from the emigrant component. By using data descriptions by data source, researchers can make decisions about which population to use in their calculations of disease frequency.

  2. Delay, change and bifurcation of the immunofluorescence distribution attractors in health statuses diagnostics and in medical treatment

    NASA Astrophysics Data System (ADS)

    Galich, Nikolay E.; Filatov, Michael V.

    2008-07-01

    This communication describes the immunology experiments and the treatment of the experimental data. New nonlinear methods for the statistical analysis of immunofluorescence of peripheral blood neutrophils have been developed. We used the respiratory burst reaction of DNA fluorescence in the nuclei of neutrophil cells due to oxidative activity. Histograms of the photon count statistics of radiant neutrophil populations in flow cytometry experiments are considered. Distributions of the fluorescence flash frequency as functions of the fluorescence intensity are analyzed. Statistical peculiarities of the histogram sets for healthy and unhealthy donors allow all histograms to be divided into three classes. The classification is based on three different types of smoothed, long-range scale averaged immunofluorescence distributions and their bifurcation. The heterogeneity of the long-range scale immunofluorescence distributions allows all histograms to be divided into three groups. The first group of histograms belongs to healthy donors. The two other groups belong to donors with autoimmune and inflammatory diseases, some of which are not diagnosed by standard biochemical methods. Medical standards and the statistical data of the immunofluorescence histograms for identifying health and illness are interconnected. The possibilities and alterations of immunofluorescence statistics in the registration, diagnosis and monitoring of different diseases under various medical treatments have been demonstrated. Health or illness criteria are connected with the statistical features of the immunofluorescence histograms. Neutrophil population fluorescence presents a sensitive, clear indicator of health status.

  3. SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalski, D; Huq, M; Bednarz, G

    Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without an immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is larger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show the difference. For both scans, the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. The LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. The nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have a limited effect on signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. The reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis that are blind to nonlinear features. The dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear-based methodology, which reflects the respiration characteristics, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
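    As an illustration of one of the complexity measures listed above, the sketch below computes sample entropy for a quasi-periodic trace and for white noise; the embedding dimension, tolerance and synthetic signals are assumptions, and the implementation is a simplified variant rather than the study's code.

```python
# Minimal sketch of sample entropy (SE), one of the complexity measures listed above,
# applied to a standardized trace. m, r and the synthetic signals are illustrative
# assumptions (r is conventionally a fraction of the signal's standard deviation).
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(6)
t = np.arange(0, 60, 0.1)
breathing = np.sin(2 * np.pi * t / 4) + 0.1 * rng.normal(size=t.size)  # ~4 s period
print("SampEn of quasi-periodic trace:", round(sample_entropy(breathing), 3))
print("SampEn of white noise:", round(sample_entropy(rng.normal(size=t.size)), 3))
```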

  4. A web-based application for initial screening of living kidney donors: development, implementation and evaluation.

    PubMed

    Moore, D R; Feurer, I D; Zavala, E Y; Shaffer, D; Karp, S; Hoy, H; Moore, D E

    2013-02-01

    Most centers utilize phone or written surveys to screen candidates who self-refer to be living kidney donors. To increase efficiency and reduce resource utilization, we developed a web-based application to screen kidney donor candidates. The aim of this study was to evaluate the use of this web-based application. Method and time of referral were tabulated and descriptive statistics summarized demographic characteristics. Time series analyses evaluated use over time. Between January 1, 2011 and March 31, 2012, 1200 candidates self-referred to be living kidney donors at our center. Eight hundred one candidates (67%) completed the web-based survey and 399 (33%) completed a phone survey. Thirty-nine percent of donors accessed the application on nights and weekends. Postimplementation of the web-based application, there was a statistically significant increase (p < 0.001) in the number of self-referrals via the web-based application as opposed to telephone contact. Also, there was a significant increase (p = 0.025) in the total number of self-referrals post-implementation from 61 to 116 per month. An interactive web-based application is an effective strategy for the initial screening of donor candidates. The web-based application increased the ability to interface with donors, process them efficiently and ultimately increased donor self-referral at our center. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.

  5. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition. PMID:24194723

  6. Characterizing microstructural features of biomedical samples by statistical analysis of Mueller matrix images

    NASA Astrophysics Data System (ADS)

    He, Honghui; Dong, Yang; Zhou, Jialing; Ma, Hui

    2017-03-01

    As one of the salient features of light, polarization contains abundant structural and optical information of media. Recently, as a comprehensive description of polarization property, the Mueller matrix polarimetry has been applied to various biomedical studies such as cancerous tissues detections. In previous works, it has been found that the structural information encoded in the 2D Mueller matrix images can be presented by other transformed parameters with more explicit relationship to certain microstructural features. In this paper, we present a statistical analyzing method to transform the 2D Mueller matrix images into frequency distribution histograms (FDHs) and their central moments to reveal the dominant structural features of samples quantitatively. The experimental results of porcine heart, intestine, stomach, and liver tissues demonstrate that the transformation parameters and central moments based on the statistical analysis of Mueller matrix elements have simple relationships to the dominant microstructural properties of biomedical samples, including the density and orientation of fibrous structures, the depolarization power, diattenuation and absorption abilities. It is shown in this paper that the statistical analysis of 2D images of Mueller matrix elements may provide quantitative or semi-quantitative criteria for biomedical diagnosis.
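    The transformation described above can be illustrated in a few lines: the 2D image of a Mueller matrix element is collapsed into a frequency distribution histogram and summarized by its central moments. The synthetic element image below is an assumption for illustration.

```python
# Minimal sketch of the statistical transformation described above: collapsing a 2D
# image of one Mueller matrix element into a frequency distribution histogram (FDH)
# and summarizing it by its central moments. The synthetic "element image" is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)
m22 = np.clip(rng.normal(0.6, 0.1, size=(128, 128)), -1, 1)  # toy Mueller element image

values = m22.ravel()
hist, edges = np.histogram(values, bins=50, range=(-1, 1), density=True)  # the FDH

mean = values.mean()
central_moments = {k: np.mean((values - mean) ** k) for k in (2, 3, 4)}
skewness = central_moments[3] / central_moments[2] ** 1.5
kurtosis = central_moments[4] / central_moments[2] ** 2

print("mean:", round(mean, 3), "variance:", round(central_moments[2], 4))
print("skewness:", round(skewness, 3), "kurtosis:", round(kurtosis, 3))
```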

  7. Random-phase metasurfaces at optical wavelengths

    NASA Astrophysics Data System (ADS)

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.

    2016-06-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector.

  8. Quantitative methods used in Australian health promotion research: a review of publications from 1992-2002.

    PubMed

    Smith, Ben J; Zehle, Katharina; Bauman, Adrian E; Chau, Josephine; Hawkshaw, Barbara; Frost, Steven; Thomas, Margaret

    2006-04-01

    This study examined the use of quantitative methods in Australian health promotion research in order to identify methodological trends and priorities for strengthening the evidence base for health promotion. Australian health promotion articles were identified by hand searching publications from 1992-2002 in six journals: Health Promotion Journal of Australia, Australian and New Zealand Journal of Public Health, Health Promotion International, Health Education Research, Health Education and Behavior and the American Journal of Health Promotion. The study designs and statistical methods used in articles presenting quantitative research were recorded. Of the 1,025 articles, 591 (57.7%) used quantitative methods. Cross-sectional designs were used in the majority (54.3%) of studies, with pre- and post-test (14.6%) and post-test only (9.5%) the next most common designs. Bivariate statistical methods were used in 45.9% of papers, multivariate methods in 27.1% and simple numbers and proportions in 25.4%. Few studies used higher-level statistical techniques. While most studies used quantitative methods, the majority were descriptive in nature. The study designs and statistical methods used provided limited scope for demonstrating intervention effects or understanding the determinants of change.

  9. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2006

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Searcy, Cheryl

    2008-01-01

    Between January 1 and December 31, 2006, AVO located 8,666 earthquakes of which 7,783 occurred on or near the 33 volcanoes monitored within Alaska. Monitoring highlights in 2006 include: an eruption of Augustine Volcano, a volcanic-tectonic earthquake swarm at Mount Martin, elevated seismicity and volcanic unrest at Fourpeaked Mountain, and elevated seismicity and low-level tremor at Mount Veniaminof and Korovin Volcano. A new seismic subnetwork was installed on Fourpeaked Mountain. This catalog includes: (1) descriptions and locations of seismic instrumentation deployed in the field during 2006, (2) a description of earthquake detection, recording, analysis, and data archival systems, (3) a description of seismic velocity models used for earthquake locations, (4) a summary of earthquakes located in 2006, and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, location quality statistics, daily station usage statistics, and all files used to determine the earthquake locations in 2006.

  10. Cluster categorization of urban roads to optimize their noise monitoring.

    PubMed

    Zambon, G; Benocci, R; Brambilla, G

    2016-01-01

    Road traffic in urban areas is recognized to be associated with urban mobility and public health, and it is often the main source of noise pollution. Lately, noise maps have been considered a powerful tool to estimate the population exposure to environmental noise, but they need to be validated with measured noise data. The project Dynamic Acoustic Mapping (DYNAMAP), co-funded in the framework of the LIFE 2013 program, aims to develop a statistically based method to optimize the choice and the number of monitoring sites and to automate the noise mapping update using the data retrieved from a low-cost monitoring network. The first objective should improve the spatial sampling based on the legislative road classification, as this classification is mainly based on the geometrical characteristics of the road rather than its noise emission. The present paper describes the statistical approach of the methodology under development and the results of its preliminary application to a limited sample of roads in the city of Milan. The resulting categorization of roads, based on clustering the 24-h hourly LAeq,h values, looks promising for optimizing the spatial sampling of noise monitoring toward a more efficient description of the noise pollution due to complex urban road networks than that based on the legislative road classification.
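    In the spirit of the categorization described above, the sketch below clusters roads by their 24-h hourly LAeq,h patterns with k-means after removing each road's overall level; the synthetic profiles and the choice of two clusters are illustrative assumptions, not the DYNAMAP procedure.

```python
# Minimal sketch of clustering roads by their 24-h hourly LAeq,h patterns, in the
# spirit of the categorization described above. The synthetic noise profiles and the
# choice of two clusters are illustrative assumptions (the project's actual method
# and cluster count are not reproduced here).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
hours = np.arange(24)

# Two toy road types: "arterial" roads stay loud at night, "local" roads drop off.
arterial = 70 - 3 * np.cos(2 * np.pi * (hours - 14) / 24) + rng.normal(0, 1, (30, 24))
local = 62 - 8 * np.cos(2 * np.pi * (hours - 14) / 24) + rng.normal(0, 1, (30, 24))
profiles = np.vstack([arterial, local])          # one 24-value LAeq,h profile per road

# Subtracting each profile's 24-h mean focuses the clustering on the daily *pattern*.
pattern = profiles - profiles.mean(axis=1, keepdims=True)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pattern)
print("roads per cluster:", np.bincount(labels))
```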

  11. Analysis of laparoscopic port site complications: A descriptive study

    PubMed Central

    Karthik, Somu; Augustine, Alfred Joseph; Shibumon, Mundunadackal Madhavan; Pai, Manohar Varadaraya

    2013-01-01

    CONTEXT: The rate of port site complications following conventional laparoscopic surgery is about 21 per 100,000 cases. It has shown a proportional rise with increase in the size of the port site incision and trocar. Although rare, complications that occur at the port site include infection, bleeding, and port site hernia. AIMS: To determine the morbidity associated with ports at the site of their insertion in laparoscopic surgery and to identify risk factors for complications. SETTINGS AND DESIGN: Prospective descriptive study. MATERIALS AND METHODS: In the present descriptive study, a total of 570 patients who underwent laparoscopic surgeries for various ailments between August 2009 and July 2011 at our institute were observed for port site complications prospectively and the complications were reviewed. STATISTICAL ANALYSIS USED: Descriptive statistical analysis was carried out in the present study. The statistical software, namely, SPSS 15.0 was used for the analysis of the data. RESULTS: Of the 570 patients undergoing laparoscopic surgery, 17 (3%) had developed complications specifically related to the port site during a minimum follow-up of three months; port site infection (PSI) was the most frequent (n = 10, 1.8%), followed by port site bleeding (n = 4, 0.7%), omentum-related complications (n = 2; 0.35%), and port site metastasis (n = 1, 0.175%). CONCLUSIONS: Laparoscopic surgeries are associated with minimal port site complications. Complications are related to the increased number of ports. Umbilical port involvement is the commonest. Most complications are manageable with minimal morbidity, and can be further minimized with meticulous surgical technique during entry and exit. PMID:23741110

  12. Class size as related to the use of technology, educational practices, and outcomes in Web-based nursing courses.

    PubMed

    Burruss, Nancy M; Billings, Diane M; Brownrigg, Vicki; Skiba, Diane J; Connors, Helen R

    2009-01-01

    With the expanding numbers of nursing students enrolled in Web-based courses and the shortage of faculty, class sizes are increasing. This exploratory descriptive study examined class size in relation to the use of technology and to particular educational practices and outcomes. The sample consisted of undergraduate (n = 265) and graduate (n = 863) students enrolled in fully Web-based nursing courses. The Evaluating Educational Uses of Web-based Courses in Nursing survey (Billings, D., Connors, H., Skiba, D. (2001). Benchmarking best practices in Web-based nursing courses. Advances in Nursing Science, 23, 41--52) and the Social Presence Scale (Gunawardena, C. N., Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. The American Journal of Distance Education, 11, 9-26.) were used to gather data about the study variables. Class sizes were defined as very small (1 to 10 students), small (11 to 20 students), medium (21 to 30 students), large (31 to 40 students), and very large (41 students and above). Descriptive and inferential statistics were used to analyze the data. There were significant differences by class size in students' perceptions of active participation in learning, student-faculty interaction, peer interaction, and connectedness. Some differences by class size between undergraduate and graduate students were also found, and these require further study.

  13. Interindividual registration and dose mapping for voxelwise population analysis of rectal toxicity in prostate cancer radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dréan, Gaël; Acosta, Oscar, E-mail: Oscar.Acosta@univ-rennes1.fr; Simon, Antoine

    2016-06-15

    Purpose: Recent studies revealed a trend toward voxelwise population analysis in order to understand the local dose/toxicity relationships in prostate cancer radiotherapy. Such approaches require, however, an accurate interindividual mapping of the anatomies and 3D dose distributions toward a common coordinate system. This step is challenging due to the high interindividual variability. In this paper, the authors propose a method designed for interindividual nonrigid registration of the rectum and dose mapping for population analysis. Methods: The method is based on the computation of a normalized structural description of the rectum using a Laplacian-based model. This description takes advantage of the tubular structure of the rectum and its centerline to be embedded in a nonrigid registration-based scheme. The performance of the method was evaluated on 30 individuals treated for prostate cancer in a leave-one-out cross validation. Results: Performance was measured using classical metrics (Dice score and Hausdorff distance), along with new metrics devised to better assess dose mapping in relation with structural deformation (dose-organ overlap). Considering these scores, the proposed method outperforms intensity-based and distance map-based registration methods. Conclusions: The proposed method allows for accurately mapping interindividual 3D dose distributions toward a single anatomical template, opening the way for further voxelwise statistical analysis.

  14. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  15. Analysis of spontaneous MEG activity in mild cognitive impairment and Alzheimer's disease using spectral entropies and statistical complexity measures

    NASA Astrophysics Data System (ADS)

    Bruña, Ricardo; Poza, Jesús; Gómez, Carlos; García, María; Fernández, Alberto; Hornero, Roberto

    2012-06-01

    Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on Euclidean distance ED) and three statistical complexities (based on Lopez Ruiz-Mancini-Calbet complexity LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters (p < 0.01). In addition, statistically significant differences between MCI subjects and controls were achieved by ED and LMC (p < 0.05). In order to assess the diagnostic ability of the parameters, a linear discriminant analysis with a leave-one-out cross-validation procedure was applied. The accuracies reached 83.9% and 65.9% to discriminate AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
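    The irregularity and complexity measures named above can be computed directly from a normalised power spectrum. The sketch below uses a synthetic signal, an assumed sampling rate, and illustrative values of the entropic indices; it is not the authors' analysis pipeline.

```python
# Minimal sketch of spectral entropies and an LMC-style statistical complexity,
# computed from a normalised power spectral density. Parameter values (q, alpha,
# sampling rate) are illustrative assumptions.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
x = rng.normal(size=4096)                     # stand-in for one MEG channel
_, pxx = welch(x, fs=678.17, nperseg=512)     # sampling rate is an assumption
p = pxx / pxx.sum()                           # normalised spectrum ~ probability mass

shannon = -np.sum(p * np.log(p))
q = 2.0
tsallis = (1.0 - np.sum(p ** q)) / (q - 1.0)
alpha = 2.0
renyi = np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Disequilibrium: Euclidean distance to the uniform (maximally irregular) spectrum;
# LMC-style complexity as (normalised Shannon entropy) x disequilibrium.
uniform = np.full_like(p, 1.0 / p.size)
disequilibrium = np.sqrt(np.sum((p - uniform) ** 2))
h_norm = shannon / np.log(p.size)
lmc_complexity = h_norm * disequilibrium

print(f"Shannon={shannon:.3f}  Tsallis(q=2)={tsallis:.3f}  Renyi(a=2)={renyi:.3f}")
print(f"ED={disequilibrium:.4f}  LMC complexity={lmc_complexity:.4f}")
```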

  16. Methodological reporting of randomized trials in five leading Chinese nursing journals.

    PubMed

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.

  17. Simultaneous binary hash and features learning for image retrieval

    NASA Astrophysics Data System (ADS)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have many applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps pixel-based image representations to a hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on the use of two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.
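    As a rough illustration of the retrieval step that binary hash codes enable, the sketch below binarises real-valued embeddings by sign thresholding and ranks database items by Hamming distance to a query. It deliberately replaces the paper's trained convolutional network and autoencoder with a random projection, so everything in it is an assumption for demonstration only.

```python
# Simplified illustration of hash-based retrieval: real-valued image embeddings
# (random stand-ins for learned CNN features) are binarised by sign thresholding
# and ranked by Hamming distance to a query. This shows only the hashing/ranking
# step, not the paper's network training.
import numpy as np

rng = np.random.default_rng(2)
n_images, feat_dim, n_bits = 1000, 128, 48

features = rng.normal(size=(n_images, feat_dim))      # assumed learned descriptors
projection = rng.normal(size=(feat_dim, n_bits))      # stands in for the trained encoder

codes = (features @ projection > 0).astype(np.uint8)  # binary hash codes

def retrieve(query_feat, k=5):
    """Return the indices of the k database images closest in Hamming distance."""
    q = (query_feat @ projection > 0).astype(np.uint8)
    hamming = np.count_nonzero(codes != q, axis=1)
    return np.argsort(hamming)[:k]

print("top-5 neighbours of image 0:", retrieve(features[0]))
```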

  18. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    NASA Astrophysics Data System (ADS)

    Panebianco, Stefano; Dubray, Nöel; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc

    2014-04-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

  19. A general description of additive and nonadditive elements of sperm competitiveness and their relation to male fertilization success.

    PubMed

    Engqvist, Leif

    2013-05-01

    A complete understanding of male reproductive success, and thus sexual selection, often requires an insight into male success in sperm competition. Genuine conclusions on male sperm competitiveness can only be made in real competitive situations. However, statistical analyses of sperm competitiveness from fertilization success data have been shown to be problematic. Here, I first outline a comprehensive general description of the different additive and nonadditive elements relevant for the outcome of sperm competition staged between two males. Based on this description, I will highlight two main problems that are frequently encountered in experiments aiming at estimating sperm competitiveness. First, I focus on potential problems when using standardized competitors versus random mating trials, because trials with standardized competitors do not allow generalization if male-male interactions are important. Second, I illustrate the necessity to analyze data on the logit scale rather than on raw proportions, because only the logit scale allows a clean separation of additive and nonadditive effects (i.e., male × male and female × male interactions). © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
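    The point about analysing fertilization success on the logit scale rather than on raw proportions can be made with a short numerical example; the competitiveness values below are hypothetical.

```python
# Sketch of why the logit scale separates additive effects cleanly: differences in
# male competitiveness (log-odds units) are recovered exactly on the logit scale,
# whereas differences in raw paternity proportions depend on the rival's strength.
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical competitiveness values for two focal males against the same rival.
male_a, male_b, rival = 1.0, 2.0, 0.5

p_a = inv_logit(male_a - rival)     # expected paternity share of male A
p_b = inv_logit(male_b - rival)     # expected paternity share of male B

print(logit(p_b) - logit(p_a))      # 1.0 = male_b - male_a, independent of the rival
print(p_b - p_a)                    # difference in raw proportions varies with the rival
```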

  20. Fokker-Planck description for the queue dynamics of large tick stocks.

    PubMed

    Garèche, A; Disdier, G; Kockelkoren, J; Bouchaud, J-P

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. "Jump" events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.
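    A one-dimensional simplification of the calibration idea described above can be sketched as follows: bin the rescaled queue volume and estimate the conditional mean and variance of its increments, which play the role of the state-dependent drift and diffusion. The synthetic mean-reverting series and the bin choices are assumptions for illustration, not the paper's two-dimensional estimator.

```python
# Minimal sketch of estimating a state-dependent drift and diffusion coefficient
# from a queue-volume time series (one-dimensional toy version).
import numpy as np

rng = np.random.default_rng(3)

# Synthetic queue volumes (e.g. best-ask volume sampled at regular intervals).
n = 100_000
v = np.empty(n)
v[0] = 500.0
for t in range(1, n):
    pull = 0.05 * (500.0 - v[t - 1])             # mean-reverting toy dynamics
    v[t] = max(v[t - 1] + pull + 25.0 * rng.normal(), 1.0)

u = v / v.mean()                                  # rescaled volume, as in the paper
du = np.diff(u)

bins = np.linspace(0.0, 3.0, 31)
idx = np.digitize(u[:-1], bins)

for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() < 100:
        continue
    drift_hat = du[mask].mean()                   # conditional mean increment (drift)
    diff_hat = 0.5 * du[mask].var()               # conditional half-variance (diffusion)
    centre = 0.5 * (bins[b - 1] + bins[b])
    print(f"u≈{centre:4.2f}  drift={drift_hat:+.4f}  diffusion={diff_hat:.4f}")
```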

  1. Fokker-Planck description for the queue dynamics of large tick stocks

    NASA Astrophysics Data System (ADS)

    Garèche, A.; Disdier, G.; Kockelkoren, J.; Bouchaud, J.-P.

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. “Jump” events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.

  2. Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network

    NASA Astrophysics Data System (ADS)

    Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang

    Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made understanding neurological disorders at a whole brain connectivity level possible. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than simple description with any single physiological parameter.
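    The classification step described above (feature selection followed by an SVM, assessed by cross-validation) can be sketched with scikit-learn on synthetic stand-in data; the feature counts, selector, and kernel below are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of feature selection + SVM classification with leave-one-out
# cross-validation, on synthetic stand-ins for per-ROI connectivity statistics.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_subjects, n_features = 40, 540          # e.g. 90 ROIs x 6 connectivity networks
X = rng.normal(size=(n_subjects, n_features))
y = np.repeat([0, 1], n_subjects // 2)    # 0 = control, 1 = MCI (synthetic labels)

# Feature selection sits inside the pipeline so it is re-fit within each
# cross-validation fold, avoiding optimistic bias.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),
                    SVC(kernel="linear", C=1.0))

scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```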

  3. Review evaluation indicators of health information technology course of master's degree in medical sciences universities' based on CIPP Model.

    PubMed

    Yarmohammadian, Mohammad Hossein; Mohebbi, Nooshin

    2015-01-01

    The sensitivity of teaching and learning processes in universities emphasizes the necessity of assessing the quality of education, which improves the efficiency and effectiveness of the country. This study was conducted with the aim of reviewing and developing the evaluation criteria of the health information technology course at the Master of Science level in Tehran, Shahid Beheshti, Isfahan, Shiraz, and Kashan medical universities in 2012 using the CIPP model. This was an applied, descriptive study with a statistical population of faculty members (23), students (97), directorates (5), and library staff (5), a total of 130 people, and sampling was done as a census. In order to collect data, four questionnaires based on a Likert scale with scores ranging from 1 to 5 were used. Questionnaires' validity was confirmed by consulting with health information technology and educational evaluation experts, and questionnaires' reliability for directorates, faculty, students, and library staff was tested using the Cronbach's alpha coefficient formula, which gave r = 0.74, r = 0.93, r = 0.98, and r = 0.80, respectively. SPSS software was used for data analysis, with both descriptive and inferential statistics including mean, frequency percentage, standard deviation, Pearson correlation, and Spearman correlation. Based on studies from various sources, expert commentary, and the CIPP evaluation model, 139 indicators associated with this course were determined and then evaluated, covering the three factors of context, input, and process in the areas of human resources professional, academic services, students, directors, faculty, curriculum, budget, facilities, teaching-learning activities, scientific research activities of students and faculty, and the activities of the library staff. This study showed that, in total, the health information technology course at the Master of Science level is relatively good, but efforts to improve and correct it in some areas and to continue the evaluation process seem necessary.

  4. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    PubMed

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.

  5. Multi-scale structure and topological anomaly detection via a new network statistic: The onion decomposition.

    PubMed

    Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine

    2016-08-18

    We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
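    A minimal implementation of the onion decomposition as described above, written for a plain adjacency dictionary, is sketched below; it records, for each vertex, its k-core number and the layer (removal stage), from which the onion spectrum follows. This is an independent sketch, not the authors' reference code.

```python
# Onion decomposition: peel the graph layer by layer, recording for each vertex
# both its k-core number and the stage (layer) at which it is removed.
from collections import defaultdict

def onion_decomposition(adj):
    """adj: {node: set(neighbours)}. Returns (core, layer) dictionaries."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core, layer = {}, {}
    k, current_layer = 0, 0
    while degree:
        k = max(k, min(degree.values()))
        current_layer += 1
        # One onion layer: every remaining vertex whose degree is <= the current core.
        peel = [v for v, d in degree.items() if d <= k]
        for v in peel:
            core[v] = k
            layer[v] = current_layer
            for u in adj[v]:
                if u in degree and u != v:
                    adj[u].discard(v)
                    degree[u] -= 1
            del degree[v], adj[v]
    return core, layer

# Small example: a triangle with a pendant vertex.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
core, layer = onion_decomposition(g)
print("core numbers:", core)    # pendant vertex in the 1-core, triangle in the 2-core
print("onion layers:", layer)   # removal stage of each vertex

# The onion spectrum counts how many vertices are removed in each layer.
spectrum = defaultdict(int)
for v, l in layer.items():
    spectrum[l] += 1
print("onion spectrum:", dict(spectrum))
```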

  6. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†

    PubMed Central

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-01-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370

  7. Practicality of Elementary Statistics Module Based on CTL Completed by Instructions on Using Software R

    NASA Astrophysics Data System (ADS)

    Delyana, H.; Rismen, S.; Handayani, S.

    2018-04-01

    This research is development research using the 4-D design model (define, design, develop, and disseminate). The define stage comprised the following needs analyses: syllabus analysis, textbook analysis, student characteristics analysis, and literature analysis. The textbook analysis showed that students still had difficulty understanding the two textbooks they are required to own, that the form of presentation did not help students learn concepts independently, and that the textbooks were not equipped with guidance for data processing using the software R. The developed module was considered valid by the experts. Further field trials were conducted to determine its practicality and effectiveness. The trial was conducted with four randomly selected students of the Mathematics Education Study Program of STKIP PGRI who had not yet taken the Basic Statistics course. The practicality aspects considered were ease of use, time efficiency, ease of interpretation, and equivalence, with practicality values of 3.7, 3.79, 3.7, and 3.78, respectively. Based on the trial results, students considered the module very practical for use in learning. This means that the developed module can be used by students in Elementary Statistics learning.

  8. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, as well as normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
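    The distinction drawn above between relative risk and odds ratio can be made concrete with a single hypothetical 2 × 2 table:

```python
# Worked example: relative risk (cohort designs) vs odds ratio (case-control
# designs) computed from the same hypothetical 2x2 table.
exposed_event, exposed_no_event = 30, 70          # exposed group
unexposed_event, unexposed_no_event = 10, 90      # unexposed group

risk_exposed = exposed_event / (exposed_event + exposed_no_event)          # 0.30
risk_unexposed = unexposed_event / (unexposed_event + unexposed_no_event)  # 0.10
relative_risk = risk_exposed / risk_unexposed                              # 3.0

odds_exposed = exposed_event / exposed_no_event          # 30/70
odds_unexposed = unexposed_event / unexposed_no_event    # 10/90
odds_ratio = odds_exposed / odds_unexposed               # ≈ 3.86

print(f"relative risk = {relative_risk:.2f}, odds ratio = {odds_ratio:.2f}")
# The two measures differ noticeably here because the event is not rare (30%);
# for rare events the odds ratio approximates the relative risk.
```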

  9. Towards Web-based representation and processing of health information

    PubMed Central

    Gao, Sheng; Mioc, Darka; Yi, Xiaolun; Anton, Francois; Oldfield, Eddie; Coleman, David J

    2009-01-01

    Background There is great concern within health surveillance on how to grapple with environmental degradation, rapid urbanization, population mobility and growth. The Internet has emerged as an efficient way to share health information, enabling users to access and understand data at their fingertips. Increasingly complex problems in the health field require increasingly sophisticated computer software, distributed computing power, and standardized data sharing. To address this need, Web-based mapping is now emerging as an important tool to enable health practitioners, policy makers, and the public to understand spatial health risks, population health trends and vulnerabilities. Today several web-based health applications generate dynamic maps; however, for people to fully interpret the maps they need a description of the data source and of the method used in the data analysis or statistical modeling. For the representation of health information through Web-mapping applications, a standard format is still lacking that can accommodate all fixed (such as location) and variable (such as age, gender, health outcome, etc.) indicators. Furthermore, net-centric computing has not been adequately applied to support flexible health data processing and mapping online. Results The authors of this study designed a HEalth Representation XML (HERXML) schema that consists of the semantic (e.g., health activity description, data source description, the statistical methodology used for analysis), geometric, and cartographical representations of health data. A case study was carried out on the development of a web application and services within the Canadian Geospatial Data Infrastructure (CGDI) framework for community health programs of the New Brunswick Lung Association. This study facilitated the online processing, mapping and sharing of health information, with the use of HERXML and Open Geospatial Consortium (OGC) services. It brought a new solution for better health data representation and an initial exploration of Web-based processing of health information. Conclusion The designed HERXML has proven to be an appropriate solution for supporting the Web representation of health information. It can be used by health practitioners, policy makers, and the public in disease etiology, health planning, health resource management, health promotion and health education. The utilization of Web-based processing services in this study provides a flexible way for users to select and use certain processing functions for health data processing and mapping via the Web. This research provides easy access to geospatial and health data for understanding disease trends, and promotes the growth and enrichment of the CGDI in the public health sector. PMID:19159445

  10. Data from the Television Game Show "Friend or Foe?"

    ERIC Educational Resources Information Center

    Kalist, David E.

    2004-01-01

    The data discussed in this paper are from the television game show "Friend or Foe", and can be used to examine whether age, gender, race, and the amount of prize money affect contestants' strategies. The data are suitable for a variety of statistical analyses, such as descriptive statistics, testing for differences in means or proportions, and…

  11. Calibrating the Difficulty of an Assessment Tool: The Blooming of a Statistics Examination

    ERIC Educational Resources Information Center

    Dunham, Bruce; Yapa, Gaitri; Yu, Eugenia

    2015-01-01

    Bloom's taxonomy is proposed as a tool by which to assess the level of complexity of assessment tasks in statistics. Guidelines are provided for how to locate tasks at each level of the taxonomy, along with descriptions and examples of suggested test questions. Through the "Blooming" of an examination--that is, locating its constituent…

  12. Examples of Data Analysis with SPSS/PC+ Studentware.

    ERIC Educational Resources Information Center

    MacFarland, Thomas W.

    Intended for classroom use only, these unpublished notes contain computer lessons on descriptive statistics with files previously created in WordPerfect 4.2 and Lotus 1-2-3 Version 1.A for the IBM PC+. The statistical measures covered include Student's t-test with two independent samples; Student's t-test with a paired sample; Chi-square analysis;…

  13. Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data (SCED). NFES 2011-801

    ERIC Educational Resources Information Center

    National Forum on Education Statistics, 2011

    2011-01-01

    In this handbook, "Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data" (SCED), the National Center for Education Statistics (NCES) and the National Forum on Education Statistics have extended the existing secondary course classification system with codes and descriptions for courses offered at…

  14. Illinois' Forests, 2005: Statistics, Methods, and Quality Assurance

    Treesearch

    Susan J. Crocker; Charles J. Barnett; Mark A. Hatfield

    2013-01-01

    The first full annual inventory of Illinois' forests was completed in 2005. This report contains 1) descriptive information on methods, statistics, and quality assurance of data collection, 2) a glossary of terms, 3) tables that summarize quality assurance, and 4) a core set of tabular estimates for a variety of forest resources. A detailed analysis of inventory...

  15. Community College Low-Income and Minority Student Completion Study: Descriptive Statistics from the 1992 High School Cohort

    ERIC Educational Resources Information Center

    Bailey, Thomas; Jenkins, Davis; Leinbach, Timothy

    2005-01-01

    This report summarizes statistics on access and attainment in higher education, focusing particularly on community college students, using data from the National Education Longitudinal Study of 1988 (NELS:88), which follows a nationally representative sample of individuals who were eighth graders in the spring of 1988. A sample of these…

  16. Fundamental Statistical Descriptions of Plasma Turbulence in Magnetic Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John A. Krommes

    2001-02-16

    A pedagogical review of the historical development and current status (as of early 2000) of systematic statistical theories of plasma turbulence is undertaken. Emphasis is on conceptual foundations and methodology, not practical applications. Particular attention is paid to equations and formalism appropriate to strongly magnetized, fully ionized plasmas. Extensive reference to the literature on neutral-fluid turbulence is made, but the unique properties and problems of plasmas are emphasized throughout. Discussions are given of quasilinear theory, weak-turbulence theory, resonance-broadening theory, and the clump algorithm. Those are developed independently, then shown to be special cases of the direct-interaction approximation (DIA), which provides a central focus for the article. Various methods of renormalized perturbation theory are described, then unified with the aid of the generating-functional formalism of Martin, Siggia, and Rose. A general expression for the renormalized dielectric function is deduced and discussed in detail. Modern approaches such as decimation and PDF methods are described. Derivations of DIA-based Markovian closures are discussed. The eddy-damped quasinormal Markovian closure is shown to be nonrealizable in the presence of waves, and a new realizable Markovian closure is presented. The test-field model and a realizable modification thereof are also summarized. Numerical solutions of various closures for some plasma-physics paradigms are reviewed. The variational approach to bounds on transport is developed. Miscellaneous topics include Onsager symmetries for turbulence, the interpretation of entropy balances for both kinetic and fluid descriptions, self-organized criticality, statistical interactions between disparate scales, and the roles of both mean and random shear. Appendices are provided on Fourier transform conventions, dimensional and scaling analysis, the derivations of nonlinear gyrokinetic and gyrofluid equations, stochasticity criteria for quasilinear theory, formal aspects of resonance-broadening theory, Novikov's theorem, the treatment of weak inhomogeneity, the derivation of the Vlasov weak-turbulence wave kinetic equation from a fully renormalized description, some features of a code for solving the direct-interaction approximation and related Markovian closures, the details of the solution of the EDQNM closure for a solvable three-wave model, and the notation used in the article.

  17. [Diversity and frequency of scientific research design and statistical methods in the "Arquivos Brasileiros de Oftalmologia": a systematic review of the "Arquivos Brasileiros de Oftalmologia"--1993-2002].

    PubMed

    Crosta, Fernando; Nishiwaki-Dantas, Maria Cristina; Silvino, Wilmar; Dantas, Paulo Elias Correa

    2005-01-01

    To verify the frequency of study design, applied statistical analysis and approval by institutional review offices (Ethics Committee) of articles published in the "Arquivos Brasileiros de Oftalmologia" during a 10-year interval, with later comparative and critical analysis against some of the main international journals in the field of Ophthalmology. A systematic review without meta-analysis was performed. Scientific papers published in the "Arquivos Brasileiros de Oftalmologia" between January 1993 and December 2002 were reviewed by two independent reviewers and classified according to the applied study design, statistical analysis and approval by the institutional review offices. To categorize those variables, a descriptive statistical analysis was used. After applying inclusion and exclusion criteria, 584 articles for evaluation of statistical analysis and 725 articles for evaluation of study design were reviewed. Contingency table analysis (23.10%) was the most frequently applied statistical method, followed by non-parametric tests (18.19%), Student's t test (12.65%), central tendency measures (10.60%) and analysis of variance (9.81%). Of 584 reviewed articles, 291 (49.82%) presented no statistical analysis. Observational case series (26.48%) was the most frequently used type of study design, followed by interventional case series (18.48%), observational case description (13.37%), non-random clinical study (8.96%) and experimental study (8.55%). We found a higher frequency of observational clinical studies and a lack of statistical analysis in almost half of the published papers. An increase in studies with approval by an institutional review office (Ethics Committee) was noted after it became mandatory in 1996.

  18. Descriptive statistics and correlation analysis of agronomic traits in a maize recombinant inbred line population.

    PubMed

    Zhang, H M; Hui, G Q; Luo, Q; Sun, Y; Liu, X H

    2014-01-21

    Maize (Zea mays L.) is one of the most important crops in the world. In this study, 13 agronomic traits of a recombinant inbred line population that was derived from the cross between Mo17 and Huangzao4 were investigated in maize: ear diameter, ear length, ear axis diameter, ear weight, plant height, ear height, days to pollen shed (DPS), days to silking (DS), the interval between DPS and DS, 100-kernel weight, kernel test weight, ear kernel weight, and kernel rate. Furthermore, the descriptive statistics and correlation analysis of the 13 traits were performed using the SPSS 11.5 software. The results provide the phenotypic data needed for quantitative trait locus mapping of these agronomic traits.
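    The descriptive-statistics and correlation step described above can be sketched with pandas and SciPy on a small synthetic table standing in for the recombinant inbred line data (the abstract reports using SPSS; the traits and values below are illustrative):

```python
# Descriptive statistics and pairwise Pearson correlations on a synthetic table
# of agronomic traits.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(5)
n_lines = 120
ear_length = rng.normal(15.0, 2.0, n_lines)                      # cm, synthetic
ear_weight = 8.0 * ear_length + rng.normal(0.0, 10.0, n_lines)   # g, correlated by construction
plant_height = rng.normal(180.0, 15.0, n_lines)                  # cm, independent

df = pd.DataFrame({"ear_length": ear_length,
                   "ear_weight": ear_weight,
                   "plant_height": plant_height})

print(df.describe())                 # mean, SD, quartiles, min/max per trait
print(df.corr(method="pearson"))     # pairwise Pearson correlation matrix

r, p = stats.pearsonr(df["ear_length"], df["ear_weight"])
print(f"ear length vs ear weight: r = {r:.2f}, p = {p:.3g}")
```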

  19. [Congresses of the Spanish Association of Pediatrics: bibliometric analysis as a springboard for debate].

    PubMed

    González de Dios, J; Paredes Cencillo, C

    2004-12-01

    Congresses are periodic meetings that are required to make known and discuss advances in the various fields of medicine. Bibliometric indicators are important tools used to determine the quality of scientific publications. However, this type of study is infrequently performed in free communications of congresses. A bibliometric study of all the free communications published in the congresses of the Spanish Association of Pediatrics over 4 years, divided in two periods (1996-1997 and 2000-2001) (n = 2677) was performed. Bibliometric indicators were classified into quantitative (productivity), qualitative (statistical accessibility) and scientific evidence. Quantitative indicators: There were 928 free communications in 1996, 681 in 1997, 560 in 2000, and 508 in 2001. Eighty-eight percent were in poster format and 87 % were in structured format. There was a median of six authors per communication. The main subject areas were infectology, neonatology, hemato-oncology, neurology and endocrinology. Ninety-five per cent of communications were signed by hospitals with a marked contribution by hospitals in Andalusia and Madrid. Qualitative indicators: Statistical accessibility < 2 in 86 % and > 7 in 2.9 %. Scientific evidence indicators: The quality of scientific evidence was good in only 1 % and was average in 9 %, since 90 % of all the studies were descriptive (mainly clinical cases). Evidence-based methodological concepts were used in only 1.9 %. Compared with 1996-1997, in 2000-2001 there were fewer communications, more posters, and more structured communications, as well as greater statistical accessibility and better scientific evidence indicators, but these differences were not statistically significant. Bibliometric study of the congresses of the Spanish Association of Pediatrics is a good starting point to analyze the quality of pediatric meetings and discuss possible solutions: a rigorous scientific committee with quality criteria, more analytical and/or experimental studies and fewer descriptive studies (especially clinical cases); restricting the number of authors per communication, greater collaboration with epidemiologists and/or biostatisticians, and favoring structured communications would also improve quality.

  20. Novel method of fabricating individual trays for maxillectomy patients by computer-aided design and rapid prototyping.

    PubMed

    Huang, Zhi; Wang, Xin-zhi; Hou, Yue-Zhong

    2015-02-01

    Making impressions for maxillectomy patients is an essential but difficult task. This study developed a novel method to fabricate individual trays by computer-aided design (CAD) and rapid prototyping (RP) to simplify the process and enhance patient safety. Five unilateral maxillectomy patients were recruited for this study. For each patient, a computed tomography (CT) scan was taken. Based on the 3D surface reconstruction of the target area, an individual tray was manufactured by CAD/RP. With a conventional custom tray as control, two final impressions were made using the different types of tray for each patient. The trays were sectioned, and in each section the thickness of the material was measured at six evenly distributed points. Descriptive statistics and paired t-test were used to examine the difference of the impression thickness. SAS 9.3 was applied in the statistical analysis. Afterwards, all casts were then optically 3D scanned and compared digitally to evaluate the feasibility of this method. Impressions of all five maxillectomy patients were successfully made with individual trays fabricated by CAD/RP and traditional trays. The descriptive statistics of impression thickness measurement showed slightly more uneven results in the traditional trays, but no statistical significance was shown. A 3D digital comparison showed acceptable discrepancies within 1 mm in the majority of cast areas. The largest difference of 3 mm was observed in the buccal wall of the defective areas. Moderate deviations of 1 to 2 mm were detected in the buccal and labial vestibular groove areas. This study confirmed the feasibility of a novel method of fabricating individual trays by CAD/RP. Impressions made by individual trays manufactured using CAD/RP had a uniform thickness, with an acceptable level of accuracy compared to those made through conventional processes. © 2014 by the American College of Prosthodontists.
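    The thickness comparison described above pairs measurements of the same sections across the two tray types, so a paired t-test applies; the sketch below uses synthetic stand-in measurements, not the study data.

```python
# Paired t-test on impression-thickness measurements for the two tray types,
# plus a descriptive comparison of evenness (smaller SD = more uniform thickness).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_points = 30                                   # e.g. 6 points x 5 sections (illustrative)
cad_rp_tray = rng.normal(3.0, 0.2, n_points)    # impression thickness (mm), CAD/RP tray
conventional = rng.normal(3.0, 0.4, n_points)   # impression thickness (mm), custom tray

t_stat, p_value = stats.ttest_rel(cad_rp_tray, conventional)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

print(f"SD CAD/RP = {cad_rp_tray.std(ddof=1):.2f} mm, "
      f"SD conventional = {conventional.std(ddof=1):.2f} mm")
```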

  1. Description of Respiratory Microbiology of Children With Long-Term Tracheostomies.

    PubMed

    McCaleb, Rachael; Warren, Robert H; Willis, Denise; Maples, Holly D; Bai, Shasha; O'Brien, Catherine E

    2016-04-01

    There is little evidence in the medical literature to guide empiric treatment of pediatric patients with long-term tracheostomies who present with signs and symptoms of a bacterial respiratory infection. The overall goal of this study was to describe the respiratory microbiology in this study population at our institution. This study was a retrospective chart review of all subjects with tracheostomies currently receiving care at the Arkansas Center for Respiratory Technology Dependent Children. Descriptive statistics were used to describe the respiratory microbiology of the full study group. Several subgroup analyses were conducted, including description of microbiology according to time with tracheostomy, mean time to isolation of specific organisms after the tracheostomy tube was placed, association between Pseudomonas aeruginosa or methicillin-resistant Staphylococcus aureus isolation and prescribed antibiotic courses, and description of microbiology according to level of chronic respiratory support. Available respiratory culture results up to July 2011 were collected for all eligible subjects. Descriptive statistics were used to describe subject characteristics, and chi-square analysis was used to analyze associations between categorical data. P < .05 was considered statistically significant. A total of 93 subjects met inclusion criteria for the study. The median (interquartile range) age at time of tracheotomy was 0.84 (0.36-3.25) y, and the median (interquartile range) time with tracheostomy was 4.29 (2.77-9.49) y. The most common organism isolated was P. aeruginosa (90.3%), with Gram-negative organisms predominating. However, 55.9% of the study population had a respiratory culture positive for methicillin-resistant S. aureus. Methicillin-sensitive S. aureus was isolated the soonest after tracheostomy placement. Specific organisms were not related to level of chronic respiratory support or likelihood of receiving antibiotics. This study provides an updated overview of the variety of potential pathogens isolated from respiratory cultures of pediatric subjects with long-term tracheostomies. Copyright © 2016 by Daedalus Enterprises.

  2. Psychological methodology will change profoundly due to the necessity to focus on intra-individual variation.

    PubMed

    Molenaar, Peter C M

    2007-03-01

    I am in general agreement with Toomela's (Integrative Psychological and Behavioral Science doi:10.1007/s12124-007-9004-0, 2007) plea for an alternative psychological methodology inspired by his description of the German-Austrian orientation. I will argue, however, that this alternative methodology has to be based on the classical ergodic theorems, using state-of-the-art statistical time series analysis of intra-individual variation as its main tool. Some more specific points made by Toomela will be criticized, while for others a more extreme elaboration along the lines indicated by Toomela is proposed.

  3. Methods and application of system identification in shock and vibration.

    NASA Technical Reports Server (NTRS)

    Collins, J. D.; Young, J. P.; Kiefling, L.

    1972-01-01

    A logical picture is presented of current useful system identification techniques in the shock and vibration field. A technology tree diagram is developed for the purpose of organizing and categorizing the widely varying approaches according to the fundamental nature of each. Specific examples of accomplished activity for each identification category are noted and discussed. To provide greater insight into the most current trends in the system identification field, a somewhat detailed description is presented of the essential features of a recently developed technique that is based on making the maximum use of all statistically known information about a system.

  4. The relationship between budget allocated and budget utilized of faculties in an academic institution

    NASA Astrophysics Data System (ADS)

    Aziz, Wan Noor Hayatie Wan Abdul; Aziz, Rossidah Wan Abdul; Shuib, Adibah; Razi, Nor Faezah Mohamad

    2014-06-01

    Budget planning enables an organization to set priorities towards achieving certain goals and to identify the highest priorities to be accomplished with the available funds, thus allowing allocation of resources according to the set priorities and constraints. On the other hand, budget execution and monitoring enables allocated funds or resources to be utilized as planned. Our study investigates the relationship between budget allocation and budget utilization of faculties in a public university in Malaysia. The focus is on the university's operations management financial allocation and utilization, based on five categories: emolument expenditure, academic or services and supplies expenditure, maintenance expenditure, student expenditure and other expenditure. The analysis of financial allocation and utilization is performed by yearly quarters. Data collected include three years of faculties' budget allocation and budget utilization performance for a sample of ten selected faculties of a public university in Malaysia. Results show that there is a positive and significant relationship between quarterly budget allocation and quarterly budget utilization. This study found that emolument gives the highest contribution to the total allocation and total utilization for all quarters. This paper presents findings based on the statistical analyses conducted, which include descriptive statistics and correlation analysis.

  5. General aviation activity and avionics survey. 1978. Annual summary report cy 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwenk, J.C.

    1980-03-01

    This report presents the results and a description of the 1978 General Aviation Activity and Avionics Survey. The survey was conducted during early 1979 by the FAA to obtain information on the activity and avionics of the United States registered general aviation aircraft fleet, the dominant component of civil aviation in the U.S. The survey was based on a statistically selected sample of about 13.3 percent of the general aviation fleet and obtained a response rate of 74 percent. Survey results are based upon responses but are expanded upward to represent the total population. Survey results revealed that during 1978 an estimated 39.4 million hours of flying time were logged by the 198,778 active general aviation aircraft in the U.S. fleet, yielding a mean annual flight time per aircraft of 197.7 hours. The active aircraft represented 85 percent of the registered general aviation fleet. The report contains breakdowns of these and other statistics by manufacturer/model group, aircraft type, state and region of based aircraft, and primary use. Also included are fuel consumption, lifetime airframe hours, avionics, and engine hours estimates.

  6. Brief communication: Skeletal biology past and present: Are we moving in the right direction?

    PubMed

    Hens, Samantha M; Godde, Kanya

    2008-10-01

    In 1982, Spencer's edited volume A History of American Physical Anthropology: 1930-1980 allowed numerous authors to document the state of our science, including a critical examination of skeletal biology. Some authors argued that the first 50 years of skeletal biology were characterized by the descriptive-historical approach with little regard for processual problems and that technological and statistical analyses were not rooted in theory. In an effort to determine whether Spencer's landmark volume impacted the field of skeletal biology, a content analysis was carried out for the American Journal of Physical Anthropology from 1980 to 2004. The percentage of skeletal biology articles is similar to that of previous decades. Analytical articles averaged only 32% and are defined by three criteria: statistical analysis, hypothesis testing, and broader explanatory context. However, when these criteria were scored individually, nearly 80% of papers attempted a broader theoretical explanation, 44% tested hypotheses, and 67% used advanced statistics, suggesting that the skeletal biology papers in the journal have an analytical emphasis. Considerable fluctuation exists between subfields; trends toward a more analytical approach are witnessed in the subfields of age/sex/stature/demography, skeletal maturation, anatomy, and nonhuman primate studies, which also increased in frequency, while paleontology and pathology were largely descriptive. Comparisons to the International Journal of Osteoarchaeology indicate that there are statistically significant differences between the two journals in terms of analytical criteria. These data indicate a positive shift in theoretical thinking, i.e., an attempt by most to explain processes rather than present a simple description of events.

  7. A two-component rain model for the prediction of attenuation and diversity improvement

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. Within the United States the rms deviation between measurement and prediction was 36 percent but worldwide it was 79 percent.

  8. Science and Facebook: The same popularity law!

    PubMed

    Néda, Zoltán; Varga, Levente; Biró, Tamás S

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapse on a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for the Facebook posts rescale in the same manner to the very same curve with scientific citations. This finding suggests that citations are subjected to the same growth mechanism with Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach the exponential growth of the number of publications and a preferential selection mechanism leads to a Tsallis-Pareto distribution offering an excellent description for the observed statistics. Based on our model and on the data derived from PubMed we predict that according to the present trend the average citations per scientific publications exponentially relaxes to about 4.
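    The rescaling collapse described above can be illustrated by drawing citation-like counts from a Tsallis-Pareto (Lomax) distribution and dividing by their mean; the functional form, parameters, and comparison below are illustrative assumptions rather than the fitted values from the paper.

```python
# Illustration of the mean-rescaling collapse for counts drawn from a
# Tsallis-Pareto (Pareto type II / Lomax) distribution. Parameterisation assumed.
import numpy as np

rng = np.random.default_rng(7)

a, b = 3.0, 2.0                                   # shape and scale, assumed
x = b * rng.pareto(a, size=50_000)                # Lomax-distributed "citation counts"
x_rescaled = x / x.mean()                         # rescale by the mean, as in the collapse

def tsallis_pareto_pdf(u, a, b):
    """Density of the Lomax/Tsallis-Pareto form used here (assumed parameterisation)."""
    return (a / b) * (1.0 + u / b) ** (-(a + 1.0))

# For a Lomax(a, b) variable the mean is b/(a-1), so the mean-rescaled variable is
# Lomax(a, a-1); compare its density with the empirical histogram.
hist, edges = np.histogram(x_rescaled, bins=50, range=(0, 10), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
model = tsallis_pareto_pdf(centres, a, a - 1.0)
print("max abs deviation, histogram vs model:", np.abs(hist - model).max())
```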

  9. Physisorption and desorption of H2, HD and D2 on amorphous solid water ice. Effect on mixing isotopologue on statistical population of adsorption sites.

    PubMed

    Amiaud, Lionel; Fillion, Jean-Hugues; Dulieu, François; Momeni, Anouchah; Lemaire, Jean-Louis

    2015-11-28

    We study the adsorption and desorption of three isotopologues of molecular hydrogen mixed on 10 ML of porous amorphous water ice (ASW) deposited at 10 K. Thermally programmed desorption (TPD) of H2, D2 and HD adsorbed at 10 K has been performed with different mixtures. Various coverages of H2, HD and D2 have been explored, and a model taking into account all species adsorbed on the surface is presented in detail. The model we propose allows us to extract the parameters required to fully reproduce the desorption of H2, HD and D2 for various coverages and mixtures in the sub-monolayer regime. The model is based on a statistical description of the process in a grand-canonical ensemble where adsorbed molecules are described following a Fermi-Dirac distribution.
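    The grand-canonical, Fermi-Dirac picture mentioned above can be sketched numerically: sites with a distribution of binding energies are occupied by at most one molecule each, and the chemical potential is fixed by the total coverage. The energy range, site-energy density, and target coverage below are illustrative assumptions.

```python
# Fermi-Dirac occupation of an ensemble of adsorption sites, with the chemical
# potential determined by a prescribed fractional coverage.
import numpy as np
from scipy.optimize import brentq

k_B = 8.617e-5            # Boltzmann constant, eV/K
T = 10.0                  # K, surface temperature as in the experiments described
site_energies = np.linspace(-0.080, -0.030, 200)            # binding energies (eV), assumed
weights = np.exp(-((site_energies + 0.055) / 0.01) ** 2)    # assumed site-energy density
weights /= weights.sum()

def coverage(mu):
    """Mean occupation of the site ensemble at chemical potential mu (eV)."""
    occ = 1.0 / (np.exp((site_energies - mu) / (k_B * T)) + 1.0)
    return np.sum(weights * occ)

target = 0.4              # sub-monolayer fractional coverage, assumed
mu = brentq(lambda m: coverage(m) - target, -0.2, 0.0)
print(f"chemical potential at coverage {target}: {mu * 1000:.1f} meV")

# Deeper (more negative-energy) sites fill first; shallower sites are populated
# only as the coverage, and hence the chemical potential, increases.
occ = 1.0 / (np.exp((site_energies - mu) / (k_B * T)) + 1.0)
print("occupation of deepest vs shallowest sites:", occ[0], occ[-1])
```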

  10. Science and Facebook: The same popularity law!

    PubMed Central

    Varga, Levente; Biró, Tamás S.

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapse on a single curve if one plots the citations relative to their mean value. We find that the distribution of “shares” for the Facebook posts rescale in the same manner to the very same curve with scientific citations. This finding suggests that citations are subjected to the same growth mechanism with Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach the exponential growth of the number of publications and a preferential selection mechanism leads to a Tsallis-Pareto distribution offering an excellent description for the observed statistics. Based on our model and on the data derived from PubMed we predict that according to the present trend the average citations per scientific publications exponentially relaxes to about 4. PMID:28678796

  11. Adsorption of diclofenac and nimesulide on activated carbon: Statistical physics modeling and effect of adsorbate size

    NASA Astrophysics Data System (ADS)

    Sellaoui, Lotfi; Mechi, Nesrine; Lima, Éder Cláudio; Dotto, Guilherme Luiz; Ben Lamine, Abdelmottaleb

    2017-10-01

    Based on statistical physics elements, the equilibrium adsorption of diclofenac (DFC) and nimesulide (NM) on activated carbon was analyzed by a multilayer model with saturation. The paper aimed to describe experimentally and theoretically the adsorption process and study the effect of adsorbate size using the model parameters. From numerical simulation, the number of molecules per site showed that the adsorbate molecules (DFC and NM) were mostly anchored in both sides of the pore walls. The receptor sites density increase suggested that additional sites appeared during the process, to participate in DFC and NM adsorption. The description of the adsorption energy behavior indicated that the process was physisorption. Finally, by a model parameters correlation, the size effect of the adsorbate was deduced indicating that the molecule dimension has a negligible effect on the DFC and NM adsorption.

  12. [Child nutritional status in contexts of urban poverty: a reliable indicator of family health?

    PubMed

    Huergo, Juliana; Casabona, Eugenia Lourdes

    2016-03-01

    This work questions the premise that the nutritional status of children under six years of age is a reliable indicator of family health. To do so, a research strategy based in case studies was carried out, following a qualitative design (participant observation and semistructured interviews using intentional sampling) and framed within the interpretivist paradigm. The anthropometric measurements of 20 children under six years of age attending the local Child Care Center in Villa La Tela, Córdoba were evaluated. Nutritional status was understood as an object that includes socially determined biological processes, and was therefore posited analytically as a cross between statistical data and its social determination. As a statistic, child nutritional status is merely descriptive; to assist in the understanding of its social determination, it must be placed in dialectical relationship with the spheres of sociability proposed to analyze the reproduction of health problems.

  13. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.
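
    Illustrative sketch (not taken from the article): accumulated degree-days (ADD) are obtained by summing, over the exposure period, the daily mean temperatures that exceed a base threshold. The temperatures and the base temperature of 0 deg C below are hypothetical; forensic applications often use species- or process-specific base temperatures.

      import numpy as np

      def accumulated_degree_days(daily_mean_temps_c, base_temp_c=0.0):
          """Sum of daily mean temperatures above a base threshold (accumulated degree-days)."""
          temps = np.asarray(daily_mean_temps_c, dtype=float)
          return float(np.sum(np.maximum(temps - base_temp_c, 0.0)))

      # Hypothetical daily mean temperatures (deg C) over one week of carcass exposure.
      week = [12.5, 15.0, 18.2, 20.1, 16.4, 9.8, 14.3]
      print(accumulated_degree_days(week, base_temp_c=0.0))  # 106.3 ADD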

  14. Concussion Management in the Classroom.

    PubMed

    Graff, Danielle M; Caperell, Kerry S

    2016-12-01

    There is a new emphasis on the team approach to pediatric concussion management, particularly in the classroom. However, it is expected that educators are unfamiliar with the "Returning to Learning" recommendations. The authors' primary objective was to assess and improve high school educators' knowledge regarding concussions and management interventions using an online education tool. A total of 247 high school educators completed a 12-question pretest to assess core knowledge of concussions and classroom management, followed by a 20-minute online literature-based education module. Participants then completed an identical posttest. The improvement in core knowledge was statistically significant (P < .001). Initial areas of weakness were the description and identification of concussions. Questions regarding concussion classroom management also showed a statistically significant increase in scores (P < .001). This study identifies the deficits in educators' knowledge regarding concussions and classroom management, as well as the significant improvement after an online educational module. © The Author(s) 2016.
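
    Illustrative sketch (not taken from the article): the abstract does not state which test produced P < .001; a paired comparison of each educator's pretest and posttest scores, for example a paired t-test, is one common choice. The scores below are simulated for illustration only.

      import numpy as np
      from scipy import stats

      # Hypothetical pre/post scores (out of 12) for 247 educators; not the study's data.
      rng = np.random.default_rng(1)
      pre = rng.integers(4, 9, size=247)
      post = np.clip(pre + rng.integers(1, 4, size=247), None, 12)

      t_stat, p_value = stats.ttest_rel(post, pre)  # paired comparison of post vs. pre scores
      print(f"t = {t_stat:.2f}, p = {p_value:.2g}")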

  15. Comparison of simultaneously recorded [H2(15)O]-PET and LORETA during cognitive and pharmacological activation.

    PubMed

    Gamma, Alex; Lehmann, Dietrich; Frei, Edi; Iwata, Kazuki; Pascual-Marqui, Roberto D; Vollenweider, Franz X

    2004-06-01

    The complementary strengths and weaknesses of established functional brain imaging methods (high spatial, low temporal resolution) and EEG-based techniques (low spatial, high temporal resolution) make their combined use a promising avenue for studying brain processes at a more fine-grained level. However, this strategy requires a better understanding of the relationship between hemodynamic/metabolic and neuroelectric measures of brain activity. We investigated possible correspondences between cerebral blood flow (CBF) as measured by [H2(15)O]-PET and intracerebral electric activity computed by Low Resolution Brain Electromagnetic Tomography (LORETA) from scalp-recorded multichannel EEG in healthy human subjects during cognitive and pharmacological stimulation. The two imaging modalities were compared by descriptive, correlational, and variance analyses, the latter carried out using statistical parametric mapping (SPM99). Descriptive visual comparison showed a partial overlap between the sets of active brain regions detected by the two modalities. A number of exclusively positive correlations of neuroelectric activity with regional CBF were found across the whole EEG frequency range, including slow wave activity, the latter finding being in contrast to most previous studies conducted in patients. Analysis of variance revealed an extensive lack of statistically significant correspondences between brain activity changes as measured by PET vs. EEG-LORETA. In general, correspondences, to the extent they were found, were dependent on experimental condition, brain region, and EEG frequency. Copyright 2004 Wiley-Liss, Inc.
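
    Illustrative sketch (not taken from the article): the correlational part of such an analysis can be pictured as correlating regional CBF values with LORETA current density in a given frequency band across regions. The values below are simulated, and the choice of a Pearson correlation is an assumption, not the authors' exact procedure.

      import numpy as np
      from scipy import stats

      # Hypothetical per-region values for one condition; not the study's measurements.
      rng = np.random.default_rng(4)
      regional_cbf = rng.normal(50.0, 5.0, 30)                       # CBF, ml/100 g/min
      loreta_alpha = 0.02 * regional_cbf + rng.normal(0.0, 0.1, 30)  # current density, arbitrary units

      r, p = stats.pearsonr(regional_cbf, loreta_alpha)
      print(f"r = {r:.2f}, p = {p:.3g}")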

  16. Progress with modeling activity landscapes in drug discovery.

    PubMed

    Vogt, Martin

    2018-04-19

    Activity landscapes (ALs) are representations and models of compound data sets annotated with a target-specific activity. In contrast to quantitative structure-activity relationship (QSAR) models, ALs aim at characterizing structure-activity relationships (SARs) on a large-scale level encompassing all active compounds for specific targets. The popularity of AL modeling has grown substantially with the public availability of large activity-annotated compound data sets. AL modeling crucially depends on molecular representations and similarity metrics used to assess structural similarity. Areas covered: The concepts of AL modeling are introduced and their basis in quantitatively assessing molecular similarity is discussed. The different types of AL modeling approaches are then described. AL designs can broadly be divided into three categories: compound-pair-based, dimensionality reduction, and network approaches. Recent developments for each of these categories are discussed, focusing on the application of mathematical, statistical, and machine learning tools for AL modeling. AL modeling using chemical space networks is covered in more detail. Expert opinion: AL modeling has remained a largely descriptive approach for the analysis of SARs. Beyond mere visualization, the application of analytical tools from statistics, machine learning and network theory has aided in the sophistication of AL designs and provides a step forward in transforming ALs from descriptive to predictive tools. To this end, optimizing representations that encode activity-relevant features of molecules might prove to be a crucial step.
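
    Illustrative sketch (not taken from the review): a chemical space network can be built by treating compounds as nodes and connecting pairs whose fingerprint similarity exceeds a threshold. The fingerprints, activities, Tanimoto threshold and helper function below are hypothetical choices made only for illustration.

      import numpy as np
      import networkx as nx

      def tanimoto(a, b):
          """Tanimoto similarity between two binary fingerprint vectors."""
          both = np.logical_and(a, b).sum()
          either = np.logical_or(a, b).sum()
          return both / either if either else 0.0

      # Hypothetical binary fingerprints and activities for six compounds.
      rng = np.random.default_rng(5)
      fps = rng.integers(0, 2, size=(6, 64)).astype(bool)
      activities = rng.uniform(5.0, 9.0, 6)  # e.g., pIC50 values

      # Chemical space network: nodes are compounds, edges join pairs above a similarity threshold.
      g = nx.Graph()
      for i, act in enumerate(activities):
          g.add_node(i, activity=act)
      for i in range(len(fps)):
          for j in range(i + 1, len(fps)):
              sim = tanimoto(fps[i], fps[j])
              if sim >= 0.4:
                  g.add_edge(i, j, similarity=sim)
      print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")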

  17. Growth status of Korean orphans raised in the affluent West: anthropometric trend, multivariate determinants, and descriptive comparison with their North and South Korean peers.

    PubMed

    Schwekendiek, Daniel J

    2017-04-01

    This paper investigates the trend in height among adult Korean orphans who were adopted in early life into affluent Western nations. Final heights of 148 females were analyzed based on a Korean government survey conducted in 2008. The height of the orphans was descriptively compared against the final heights of South and North Koreans. Furthermore, statistical determinants of orphan height were investigated in multivariate regressions. Mean height of the Korean orphans was 160.44 cm (SD 5.89), which was higher than that of South Koreans at 158.83 cm (SD 5.01). Both Korean orphans and South Koreans were taller than North Koreans at 155.30 cm (SD 4.94). However, the height of the Korean orphans stagnated at around 160-161 cm, while the heights of North and South Koreans improved over time. In the regression analysis, the socioeconomic status of the adoptive family was statistically significant in all models, while dummies for the adoptive nations and age at adoption were insignificant. This study shows that the mean final height of women experiencing extreme environmental improvements in early life is capped at 160-161 cm, tentatively suggesting that social stress factors in the host nation or early-life factors in the birth nation might have offset some of the environmental enrichment effects achieved through intercountry adoption.
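
    Illustrative sketch (not taken from the study): a multivariate regression of final height on adoptive-family socioeconomic status, age at adoption and adoptive-nation dummies can be set up as an ordinary least squares fit. The simulated data and variable names below are assumptions mimicking the description above, not the survey's records.

      import numpy as np

      # Hypothetical data mimicking the variables above; not the survey's actual records.
      rng = np.random.default_rng(2)
      n = 148
      ses = rng.normal(0.0, 1.0, n)              # adoptive-family socioeconomic status (standardized)
      age_at_adoption = rng.uniform(0.0, 5.0, n)  # years
      nation_dummy = rng.integers(0, 2, n)        # e.g., 1 = adopted to one host nation, 0 = other
      height = 160.4 + 0.8 * ses + rng.normal(0.0, 5.9, n)

      # Ordinary least squares: height ~ intercept + SES + age at adoption + nation dummy.
      X = np.column_stack([np.ones(n), ses, age_at_adoption, nation_dummy])
      coef, *_ = np.linalg.lstsq(X, height, rcond=None)
      print(dict(zip(["intercept", "ses", "age_at_adoption", "nation_dummy"], coef.round(2))))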

  18. FTree query construction for virtual screening: a statistical analysis.

    PubMed

    Gerlach, Christof; Broughton, Howard; Zaliani, Andrea

    2008-02-01

    FTrees (FT) is a well-known chemoinformatic tool able to condense molecular descriptions into a graph object and to search for actives in large databases using graph similarity. The query graph is classically derived from a known active molecule, or from a set of actives, for which a similar compound has to be found. Recently, FT similarity has been extended to fragment space, widening its capabilities. If a user were able to build a knowledge-based FT query from information other than a known active structure, the similarity search could be combined with other, normally separate, fields such as de-novo design or pharmacophore searches. With this aim in mind, we performed a comprehensive analysis of several databases in terms of FT description and provide a basic statistical analysis of the FT spaces at hand. Vendors' catalogue collections and the MDDR were used as sources of potential and known "actives", respectively. From the results reported herein, a set of ranges, mean values and standard deviations for several query parameters is presented in order to provide a reference guide for users. Applications showing how to use this information in FT query building are also provided, using a newly built 3D pharmacophore from 57 5HT-1F agonists and a published one that was used for virtual screening for tRNA-guanine transglycosylase (TGT) inhibitors.
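
    Illustrative sketch (not taken from the article): the reference guide described above amounts to reporting, for each FT query parameter, its range, mean and standard deviation across a database. The parameter names and values below are hypothetical.

      import numpy as np

      # Hypothetical values of two FT query parameters collected across a compound collection.
      query_params = {
          "n_nodes": np.array([8, 11, 9, 14, 10, 12, 7, 13]),
          "n_edges": np.array([7, 10, 8, 13, 9, 11, 6, 12]),
      }
      for name, values in query_params.items():
          print(f"{name}: range {values.min()}-{values.max()}, "
                f"mean {values.mean():.1f}, sd {values.std(ddof=1):.1f}")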

  19. FTree query construction for virtual screening: a statistical analysis

    NASA Astrophysics Data System (ADS)

    Gerlach, Christof; Broughton, Howard; Zaliani, Andrea

    2008-02-01

    FTrees (FT) is a well-known chemoinformatic tool able to condense molecular descriptions into a graph object and to search for actives in large databases using graph similarity. The query graph is classically derived from a known active molecule, or from a set of actives, for which a similar compound has to be found. Recently, FT similarity has been extended to fragment space, widening its capabilities. If a user were able to build a knowledge-based FT query from information other than a known active structure, the similarity search could be combined with other, normally separate, fields such as de-novo design or pharmacophore searches. With this aim in mind, we performed a comprehensive analysis of several databases in terms of FT description and provide a basic statistical analysis of the FT spaces at hand. Vendors' catalogue collections and the MDDR were used as sources of potential and known "actives", respectively. From the results reported herein, a set of ranges, mean values and standard deviations for several query parameters is presented in order to provide a reference guide for users. Applications showing how to use this information in FT query building are also provided, using a newly built 3D pharmacophore from 57 5HT-1F agonists and a published one that was used for virtual screening for tRNA-guanine transglycosylase (TGT) inhibitors.

  20. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't

    PubMed Central

    Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.

    2009-01-01

    One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute-force approaches. Recently, though, several groups have reported that there may be an alternative strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. When applied to neural data, the sizes of most systems studied so far fall below the crossover point. PMID:19424487
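
    Illustrative sketch (not taken from the article): a pairwise maximum entropy model assigns to each binary pattern x a probability proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j). The fields h, couplings J and system size below are hypothetical; for a small system the distribution can be evaluated exactly by enumerating all patterns.

      import itertools
      import numpy as np

      def pairwise_maxent_probs(h, J):
          """Exact probabilities of all binary patterns under P(x) proportional to exp(h.x + x.J.x/2)."""
          n = len(h)
          patterns = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
          energies = patterns @ h + 0.5 * np.einsum("ki,ij,kj->k", patterns, J, patterns)
          weights = np.exp(energies)
          return patterns, weights / weights.sum()

      # Hypothetical fields and couplings for a 4-element system (e.g., four neurons).
      rng = np.random.default_rng(3)
      h = rng.normal(-1.0, 0.5, 4)
      J = rng.normal(0.0, 0.3, (4, 4))
      J = np.triu(J, 1) + np.triu(J, 1).T   # symmetric couplings, zero diagonal
      patterns, probs = pairwise_maxent_probs(h, J)
      print(patterns[probs.argmax()], round(float(probs.max()), 3))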
