Sample records for statistically designed series

  1. Introducing Statistical Research to Undergraduate Mathematical Statistics Students Using the Guitar Hero Video Game Series

    ERIC Educational Resources Information Center

    Ramler, Ivan P.; Chapman, Jessica L.

    2011-01-01

    In this article we describe a semester-long project, based on the popular video game series Guitar Hero, designed to introduce upper-level undergraduate statistics students to statistical research. Some of the goals of this project are to help students develop statistical thinking that allows them to approach and answer open-ended research…

  2. More on Time Series Designs: A Reanalysis of Mayer and Kozlow's Data.

    ERIC Educational Resources Information Center

    Willson, Victor L.

    1982-01-01

    Differentiating between time-series design and time-series analysis, examines design considerations and reanalyzes data previously reported by Mayer and Kozlow in this journal. The current analysis supports the analysis performed by Mayer and Kozlow but puts the results on a somewhat firmer statistical footing. (Author/JN)

  3. What Time-Series Designs May Have to Offer Educational Researchers.

    ERIC Educational Resources Information Center

    Kratochwill, Thomas R.; Levin, Joel R.

    1978-01-01

    The promise of time-series designs for educational research and evaluation is reviewed. Ten time-series designs are presented and discussed in the context of threats to internal and external validity. The advantages and disadvantages of various visual and statistical data-analysis techniques are presented. A bibliography is appended. (Author/RD)

  4. Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison with a Randomized Experiment

    ERIC Educational Resources Information Center

    St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly

    2014-01-01

    Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…

  5. Research in Stochastic Processes.

    DTIC Science & Technology

    1984-10-01

    Extraction fragments from the report's reference list include: "…designs for time series," S. Cambanis, in Handbook of Statistics, Volume 5: Time Series in Time Domain, E.J. Hannan, P.R. Krishnaiah and M.M. Rao, eds., North Holland, 1984, to appear; and "Ergodic properties of stationary stable processes," S. Cambanis, C.D. Hardin, and A…

  6. The Value of Interrupted Time-Series Experiments for Community Intervention Research

    PubMed Central

    Biglan, Anthony; Ary, Dennis; Wagenaar, Alexander C.

    2015-01-01

    Greater use of interrupted time-series experiments is advocated for community intervention research. Time-series designs enable the development of knowledge about the effects of community interventions and policies in circumstances in which randomized controlled trials are too expensive, premature, or simply impractical. The multiple baseline time-series design typically involves two or more communities that are repeatedly assessed, with the intervention introduced into one community at a time. It is particularly well suited to initial evaluations of community interventions and the refinement of those interventions. This paper describes the main features of multiple baseline designs and related repeated-measures time-series experiments, discusses the threats to internal validity in multiple baseline designs, and outlines techniques for statistical analyses of time-series data. Examples are given of the use of multiple baseline designs in evaluating community interventions and policy changes. PMID:11507793

  7. Functions and Statistics: International Space Station: Up to Us. NASA Connect: Program 5 in the 2000-2001 Series.

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    This teaching unit is designed to help students in grades 5 to 8 explore the concepts of functions and statistics in the context of the International Space Station (ISS). The units in the series have been developed to enhance and enrich mathematics, science, and technology education and to accommodate different teaching and learning styles. Each…

  8. What's in a Name? The Incorrect Use of Case Series as a Study Design Label in Studies Involving Dogs and Cats.

    PubMed

    Sargeant, J M; O'Connor, A M; Cullen, J N; Makielski, K M; Jones-Bitton, A

    2017-07-01

    Study design labels are used to identify relevant literature to address specific clinical and research questions and to aid in evaluating the evidentiary value of research. Evidence from the human healthcare literature indicates that the label "case series" may be used inconsistently and inappropriately. Our primary objective was to determine the proportion of studies in the canine and feline veterinary literature labeled as case series that actually corresponded to descriptive cohort studies, population-based cohort studies, or other study designs. Our secondary objective was to identify the proportion of case series in which potentially inappropriate inferential statements were made. Descriptive evaluation of published literature. One-hundred published studies (from 19 journals) labeled as case series. Studies were identified by a structured literature search, with random selection of 100 studies from the relevant citations. Two reviewers independently characterized each study, with disagreements resolved by consensus. Of the 100 studies, 16 were case series. The remaining studies were descriptive cohort studies (35), population-based cohort studies (36), or other observational or experimental study designs (13). Almost half (48.8%) of the case series or descriptive cohort studies, with no control group and no formal statistical analysis, included inferential statements about the efficacy of treatment or statistical significance of potential risk factors. Authors, peer-reviewers, and editors should carefully consider the design elements of a study to accurately identify and label the study design. Doing so will facilitate an understanding of the evidentiary value of the results. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  9. [Diversity and frequency of scientific research design and statistical methods in the "Arquivos Brasileiros de Oftalmologia": a systematic review of the "Arquivos Brasileiros de Oftalmologia"--1993-2002].

    PubMed

    Crosta, Fernando; Nishiwaki-Dantas, Maria Cristina; Silvino, Wilmar; Dantas, Paulo Elias Correa

    2005-01-01

    To determine the frequency of study designs, the statistical analyses applied, and approval by institutional review boards (Ethics Committees) in articles published in the "Arquivos Brasileiros de Oftalmologia" over a 10-year interval, with a later comparative and critical analysis against some of the main international journals in the field of ophthalmology. A systematic review without meta-analysis was performed. Scientific papers published in the "Arquivos Brasileiros de Oftalmologia" between January 1993 and December 2002 were reviewed by two independent reviewers and classified according to the study design applied, the statistical analysis, and approval by institutional review boards. Descriptive statistics were used to categorize these variables. After applying inclusion and exclusion criteria, 584 articles were reviewed for evaluation of statistical analysis and 725 articles for evaluation of study design. Contingency tables (23.10%) were the most frequently applied statistical method, followed by non-parametric tests (18.19%), Student's t test (12.65%), measures of central tendency (10.60%), and analysis of variance (9.81%). Of the 584 reviewed articles, 291 (49.82%) presented no statistical analysis. Observational case series (26.48%) was the most frequently used study design, followed by interventional case series (18.48%), observational case description (13.37%), non-randomized clinical study (8.96%), and experimental study (8.55%). We found a predominance of observational clinical studies and a lack of statistical analysis in almost half of the published papers. An increase in studies with Ethics Committee approval was noted after such approval became mandatory in 1996.

  10. Modelling short time series in metabolomics: a functional data analysis approach.

    PubMed

    Montana, Giovanni; Berk, Maurice; Ebbels, Tim

    2011-01-01

    Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
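
    As a rough illustration of the modelling idea described above, the sketch below treats each short, sparse time course as a smooth curve and compares two groups of curves with a permutation test on the gap between their mean curves. This is a minimal stand-in written in Python with simulated toy data; the time points, group sizes, smoothing parameter, and test statistic are illustrative assumptions, not the authors' implementation.

      # Minimal functional-data sketch: smooth each short series into a curve,
      # then compare group mean curves via a permutation test. Toy data only.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(4)
      t = np.array([0.0, 1, 2, 4, 8])           # few, unevenly spaced time points
      grid = np.linspace(0, 8, 50)              # dense grid for the fitted curves

      def smooth(y):
          # fit one smooth random curve per experimental unit
          return UnivariateSpline(t, y, k=2, s=0.5)(grid)

      ctrl = np.array([smooth(np.sin(t / 3) + rng.normal(0, 0.2, t.size)) for _ in range(8)])
      trt = np.array([smooth(np.sin(t / 3) + 0.5 + rng.normal(0, 0.2, t.size)) for _ in range(8)])

      stat = np.abs(ctrl.mean(0) - trt.mean(0)).max()   # largest gap between mean curves
      pooled = np.vstack([ctrl, trt])
      perm = []
      for _ in range(999):                              # permutation null distribution
          idx = rng.permutation(len(pooled))
          a, b = pooled[idx[:8]], pooled[idx[8:]]
          perm.append(np.abs(a.mean(0) - b.mean(0)).max())
      print("permutation p ~", (1 + sum(p >= stat for p in perm)) / 1000)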

  11. Primer of statistics in dental research: part I.

    PubMed

    Shintani, Ayumi

    2014-01-01

    Statistics play essential roles in evidence-based dentistry (EBD) practice and research, ranging widely from formulating scientific questions, designing studies, and collecting and analyzing data to interpreting, reporting, and presenting study findings. Mastering statistical concepts can seem an unreachable goal to many dental researchers, in part because statistical authorities often explain statistical principles to health researchers only by way of complex mathematical concepts. This series of 2 articles aims to introduce dental researchers to 9 essential topics in statistics for conducting EBD, using intuitive examples. Part I of the series covers the first 5 topics: (1) statistical graphs, (2) how to deal with outliers, (3) p-values and confidence intervals, (4) testing equivalence, and (5) multiplicity adjustment. Part II will cover the remaining topics: (6) selecting the proper statistical tests, (7) repeated-measures analysis, (8) epidemiological considerations for causal association, and (9) analysis of agreement. Copyright © 2014. Published by Elsevier Ltd.

  12. A scan statistic for identifying optimal risk windows in vaccine safety studies using self-controlled case series design.

    PubMed

    Xu, Stanley; Hambidge, Simon J; McClure, David L; Daley, Matthew F; Glanz, Jason M

    2013-08-30

    In the examination of the association between vaccines and rare adverse events after vaccination in postlicensure observational studies, it is challenging to define appropriate risk windows because prelicensure RCTs provide little insight into the timing of specific adverse events. Past vaccine safety studies have often used prespecified risk windows based on prior publications, biological understanding of the vaccine, and expert opinion. Recently, a data-driven approach was developed to identify appropriate risk windows for vaccine safety studies that use the self-controlled case series design. This approach employs both the maximum incidence rate ratio and the linear relation between the estimated incidence rate ratio and the inverse of the average person-time at risk, given a specified risk window. In this paper, we present a scan statistic that can identify appropriate risk windows in vaccine safety studies using the self-controlled case series design while accounting for the dependence of time intervals within an individual and adjusting for time-varying covariates such as age and seasonality. This approach uses a maximum likelihood ratio test based on fixed-effects models, which have been used, in addition to conditional Poisson models, for analyzing data from the self-controlled case series design. Copyright © 2013 John Wiley & Sons, Ltd.
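
    The paper's full machinery (conditioning on individuals, fixed-effects adjustment for age and seasonality) is involved; the sketch below shows only the core scanning idea in a much-simplified form: try candidate risk windows after exposure and keep the one maximizing a likelihood ratio for events inside versus outside the window. Everything here, the window grid, observation length, simulated event times, and the binomial likelihood, is an assumption for illustration, not the authors' method.

      # Much-simplified risk-window scan: for each candidate window length after
      # vaccination, compare events inside vs. outside the window with a binomial
      # likelihood ratio and keep the window that maximizes it.
      import numpy as np
      from scipy.stats import binom

      rng = np.random.default_rng(10)
      T = 180                                            # days of observation per case
      events = np.concatenate([rng.integers(0, 14, 40),  # elevated risk in days 0-13
                               rng.integers(0, T, 60)])  # background events

      def log_lr(w):
          k, n = (events < w).sum(), len(events)
          p0, p1 = w / T, (events < w).mean()            # null vs. observed proportion
          return binom.logpmf(k, n, p1) - binom.logpmf(k, n, p0)

      windows = np.arange(7, 91, 7)
      best = windows[np.argmax([log_lr(w) for w in windows])]
      print("risk window maximizing the likelihood ratio:", best, "days")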

  13. The High Cost of Complexity in Experimental Design and Data Analysis: Type I and Type II Error Rates in Multiway ANOVA.

    ERIC Educational Resources Information Center

    Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.

    2002-01-01

    Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…

  14. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  15. Evaluating the Impact of Database Heterogeneity on Observational Study Results

    PubMed Central

    Madigan, David; Ryan, Patrick B.; Schuemie, Martijn; Stang, Paul E.; Overhage, J. Marc; Hartzema, Abraham G.; Suchard, Marc A.; DuMouchel, William; Berlin, Jesse A.

    2013-01-01

    Clinical studies that use observational databases to evaluate the effects of medical products have become commonplace. Such studies begin by selecting a particular database, a decision that published papers invariably report but do not discuss. Studies of the same issue in different databases, however, can and do generate different results, sometimes with strikingly different clinical implications. In this paper, we systematically study heterogeneity among databases, holding other study methods constant, by exploring relative risk estimates for 53 drug-outcome pairs and 2 widely used study designs (cohort studies and self-controlled case series) across 10 observational databases. When holding the study design constant, our analysis shows that estimated relative risks range from a statistically significant decreased risk to a statistically significant increased risk in 11 of 53 (21%) of drug-outcome pairs that use a cohort design and 19 of 53 (36%) of drug-outcome pairs that use a self-controlled case series design. This exceeds the proportion of pairs that were consistent across databases in both direction and statistical significance, which was 9 of 53 (17%) for cohort studies and 5 of 53 (9%) for self-controlled case series. Our findings show that clinical studies that use observational databases can be sensitive to the choice of database. More attention is needed to consider how the choice of data source may be affecting results. PMID:23648805

  16. Statistics and Epidemiology of Lead Poisoning (FY 72-L1).

    ERIC Educational Resources Information Center

    Morrison, John H., Jr.; And Others

    This report is the first in a quarterly series which will contain statistics and epidemiologic notes on lead poisoning at both the national and local levels. This report contains (a) statistics on childhood lead poisoning; (b) a status report on the Community Lead Poisoning Data System, which was designed to assist local lead control programs and…

  17. A systematic review on the use of time series data in the study of antimicrobial consumption and Pseudomonas aeruginosa resistance.

    PubMed

    Athanasiou, Christos I; Kopsini, Angeliki

    2018-06-12

    In the field of antimicrobial resistance, the number of studies that use time series data has increased recently. The purpose of this study is to systematically review all studies of antibacterial consumption and Pseudomonas aeruginosa resistance in healthcare settings that have used time series data. A systematic review of the literature through June 2017 was conducted. All studies that used time series data to examine in-hospital antibiotic consumption and Ps. aeruginosa resistance rates or incidence were eligible; no other exclusion criteria were applied. Data on the structure, terminology, methods, and results of each article were recorded and analyzed where possible. A total of thirty-six studies were retrieved, twenty-three of which met our criteria. Thirteen were quasi-experimental studies and ten were ecological observational studies. Eighteen studies collected time series data on both parameters, and the statistical methodology of "time series analysis" was applied in nine studies. Most of the studies were published in the last eight years, and the interrupted time series design was the most widespread. As expected, there was high heterogeneity in study design, terminology, and the statistical methods applied. Copyright © 2018. Published by Elsevier Ltd.

  18. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers consists of repeated observations of a physical process, that is, time series data. Such data lend themselves to a common set of statistical techniques and models designed to characterize the trends and variability (e.g., seasonality) of these repeated observations, and often these same techniques and models can be applied to a wide variety of different time series. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U.S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
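
    To make the pre-processing step concrete, here is a tiny pandas sketch of the two operations the abstract names, interpolation and aggregation. The series name, frequencies, and values are made up for illustration and imply nothing about the Toolbox's actual interface.

      # Fill gaps in a daily series, then aggregate to weekly means.
      import numpy as np
      import pandas as pd

      idx = pd.date_range("2020-01-01", periods=10, freq="D")
      flow = pd.Series([3.1, np.nan, 3.4, 3.3, np.nan, np.nan, 4.0, 4.2, 4.1, 4.5], index=idx)

      filled = flow.interpolate(method="time")    # interpolate gaps before further analysis
      weekly = filled.resample("W").mean()        # aggregate daily values to weekly means
      print(weekly)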

  19. Identifying Autocorrelation Generated by Various Error Processes in Interrupted Time-Series Regression Designs: A Comparison of AR1 and Portmanteau Tests

    ERIC Educational Resources Information Center

    Huitema, Bradley E.; McKean, Joseph W.

    2007-01-01

    Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
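
    Two of the four tests compared in this paper are available in standard libraries. As a hedged sketch, the snippet below fits an ITS regression to simulated data with AR(1) errors and applies the Durbin-Watson and Ljung-Box checks to its residuals; the Huitema-McKean and Box-Pierce variants are not shown, and the data are illustrative only.

      # Check the independent-errors assumption of an ITS regression with the
      # Durbin-Watson (AR1-focused) and Ljung-Box (portmanteau) tests.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.stattools import durbin_watson
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(2)
      n, cut = 60, 30
      t = np.arange(n)
      post = (t >= cut).astype(float)
      e = np.zeros(n)
      for i in range(1, n):                # AR(1) errors violate independence
          e[i] = 0.6 * e[i - 1] + rng.normal()
      y = 5 + 0.1 * t + 1.5 * post + e

      resid = sm.OLS(y, sm.add_constant(np.column_stack([t, post]))).fit().resid
      print("Durbin-Watson:", durbin_watson(resid))   # values near 2 suggest no AR1
      print(acorr_ljungbox(resid, lags=[6]))          # portmanteau test up to lag 6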

  20. Testing and Evaluating C3I Systems That Employ AI. Volume 1. Handbook for Testing Expert Systems

    DTIC Science & Technology

    1991-01-31

    Extraction fragments (a table-of-contents entry, "Nonequivalent Control Group Design," and body text) describe candidate evaluation designs, including: …a group that does not receive the system; (b) multiple time-series designs using a control group; and (c) nonequivalent (and nonrandomized) control group designs that rely on statistical techniques like analysis of… and that obtain pretest and…

  21. Secondary Statistical Modeling with the National Assessment of Adult Literacy: Implications for the Design of the Background Questionnaire. Working Paper Series.

    ERIC Educational Resources Information Center

    Kaplan, David

    This paper offers recommendations to the National Center for Education Statistics (NCES) on the development of the background questionnaire for the National Assessment of Adult Literacy (NAAL). The recommendations are from the viewpoint of a researcher interested in applying sophisticated statistical models to address important issues in adult…

  22. Time series, periodograms, and significance

    NASA Astrophysics Data System (ADS)

    Hernandez, G.

    1999-05-01

    The geophysical literature shows a wide and conflicting usage of methods employed to extract meaningful information on coherent oscillations from measurements. This makes it difficult, if not impossible, to relate the findings reported by different authors. Therefore, we have undertaken a critical investigation of the tests and methodology used for determining the presence of statistically significant coherent oscillations in periodograms derived from time series. Statistical significance tests are only valid when performed on the independent frequencies present in a measurement. Both the number of possible independent frequencies in a periodogram and the significance tests are determined by the number of degrees of freedom, which is the number of true independent measurements, present in the time series, rather than the number of sample points in the measurement. The number of degrees of freedom is an intrinsic property of the data, and it must be determined from the serial coherence of the time series. As part of this investigation, a detailed study has been performed which clearly illustrates the deleterious effects that the apparently innocent and commonly used processes of filtering, de-trending, and tapering of data have on periodogram analysis and the consequent difficulties in the interpretation of the statistical significance thus derived. For the sake of clarity, a specific example of actual field measurements containing unevenly-spaced measurements, gaps, etc., as well as synthetic examples, have been used to illustrate the periodogram approach, and pitfalls, leading to the (statistical) significance tests for the presence of coherent oscillations. Among the insights of this investigation are: (1) the concept of a time series being (statistically) band limited by its own serial coherence and thus having a critical sampling rate which defines one of the necessary requirements for the proper statistical design of an experiment; (2) the design of a critical test for the maximum number of significant frequencies which can be used to describe a time series, while retaining intact the variance of the test sample; (3) a demonstration of the unnecessary difficulties that manipulation of the data brings into the statistical significance interpretation of said data; and (4) the resolution and correction of the apparent discrepancy in significance results obtained by the use of the conventional Lomb-Scargle significance test, when compared with the long-standing Schuster-Walker and Fisher tests.
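
    For readers who want to experiment with the kind of unevenly sampled series the paper discusses, the following minimal sketch computes a Lomb-Scargle periodogram with scipy. The sampling scheme, period, and frequency grid are arbitrary illustrative choices, and the significance-testing subtleties that are the paper's real subject are not addressed here.

      # Lomb-Scargle periodogram of an unevenly sampled, noisy sinusoid.
      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(6)
      t = np.sort(rng.uniform(0, 100, 120))      # uneven sampling with gaps
      y = np.sin(2 * np.pi * t / 7.3) + rng.normal(0, 0.5, t.size)
      y -= y.mean()                              # remove the mean before analysis

      freqs = np.linspace(0.01, 1.0, 2000) * 2 * np.pi   # angular frequencies
      power = lombscargle(t, y, freqs)
      best = freqs[power.argmax()] / (2 * np.pi)
      print(f"strongest period ~ {1 / best:.2f} time units")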

  23. Statistical process control in nursing research.

    PubMed

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
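
    As a minimal illustration of the control-chart idea, the sketch below builds an individuals (X) chart: it estimates common-cause variation from the mean moving range and flags points beyond three-sigma limits as special-cause signals. The data, shift size, and use of the overall mean as the center line are simplifying assumptions; a real application would establish the limits from an in-control baseline period.

      # Individuals (X) control chart sketch: separate common-cause from
      # special-cause variation using three-sigma limits. Simulated data.
      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(50, 2, 30)
      x[25:] += 10                      # inject a "special cause" level shift

      mr = np.abs(np.diff(x))           # moving ranges of successive observations
      sigma = mr.mean() / 1.128         # classic sigma estimate from the mean moving range
      center = x.mean()                 # simplification: a real chart uses a baseline mean
      ucl, lcl = center + 3 * sigma, center - 3 * sigma

      flags = np.where((x > ucl) | (x < lcl))[0]
      print(f"center={center:.1f}, limits=({lcl:.1f}, {ucl:.1f}), signals at points {flags}")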

  24. Proceedings of the Conference on the Design of Experiments (23rd)

    DTIC Science & Technology

    1978-07-01

    Extraction fragments from the proceedings include: Duran, B. S. (1976), "A survey of nonparametric tests for scale," Communications in Statistics, A5, 1287; "…the twenty-third Design of Experiments Conference was the U.S. Army Combat Development Experimentation Command, Fort Ord, California"; a listing for Prof. G. E. P. Box, University of Wisconsin, on time series modelling; and "Dr. Churchill Eisenhart was recipient this year of the Samuel S. Wilks Memorial…"

  25. Quasi-experimental Studies in the Fields of Infection Control and Antibiotic Resistance, Ten Years Later: A Systematic Review.

    PubMed

    Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D

    2018-02-01

    OBJECTIVE A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.

  26. Associations of daily pediatric asthma emergency department visits with air pollution in Newark, NJ: utilizing time-series and case-crossover study designs.

    PubMed

    Gleason, Jessie A; Fagliano, Jerald A

    2015-10-01

    Asthma is one of the most common chronic diseases affecting children. This study assesses the associations of ozone and fine particulate matter (PM2.5) with pediatric emergency department visits in the urban environment of Newark, NJ. Two study designs were utilized and evaluated for usability. We obtained daily emergency department visits among children aged 3-17 years with a primary diagnosis of asthma during April to September for 2004-2007. Both a time-stratified case-crossover study design with bi-directional control sampling and a time-series study design were utilized. Lagged effects (1-d through 5-d lag, 3-d average, and 5-d average) of ozone and PM2.5 were explored, and a dose-response analysis comparing the bottom 5th percentile of 3-d average lag ozone with each 5-percentile increase was performed. Associations of an interquartile-range increase in same-day ozone were similar between the time-series and case-crossover study designs (RR = 1.08, 95% CI 1.04-1.12, and OR = 1.10, 95% CI 1.06-1.14, respectively). Similar associations were seen for 1-day lag and 3-day average lag ozone levels. PM2.5 was not associated with the outcome in either study design. The dose-response assessment indicated an association that became statistically significant and increased around 50-55 ppb, consistent for both study designs. Ozone was positively and statistically significantly associated with pediatric asthma ED visits in Newark, NJ. Our results were generally comparable across the time-series and case-crossover study designs, indicating that both are useful for assessing local air pollution impacts.
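
    As a sketch of the time-series arm of such an analysis, the snippet below regresses simulated daily ED visit counts on same-day ozone with a Poisson GLM and reports the rate ratio per interquartile-range increase. The data, the crude linear trend control, and the omission of lags, seasonality adjustment, and the case-crossover arm are all simplifications, not the authors' model.

      # Time-series Poisson regression of daily counts on same-day ozone.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 183                                    # one April-September season of days
      day = np.arange(n)
      ozone = 40 + 15 * np.sin(day / 29) + rng.normal(0, 5, n)   # daily ozone, ppb
      visits = rng.poisson(np.exp(1.2 + 0.008 * ozone))          # simulated ED counts

      X = sm.add_constant(np.column_stack([ozone, day]))         # 'day' as a crude trend control
      fit = sm.GLM(visits, X, family=sm.families.Poisson()).fit()
      iqr = np.percentile(ozone, 75) - np.percentile(ozone, 25)
      print("rate ratio per IQR increase in ozone:", np.exp(fit.params[1] * iqr))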

  27. Computer-Based Instruction in Statistical Inference; Final Report. Technical Memorandum (TM Series).

    ERIC Educational Resources Information Center

    Rosenbaum, J.; And Others

    A two-year investigation into the development of computer-assisted instruction (CAI) for the improvement of undergraduate training in statistics was undertaken. The first year was largely devoted to designing PLANIT (Programming LANguage for Interactive Teaching) which reduces, or completely eliminates, the need an author of CAI lessons would…

  28. Project T.E.A.M. (Technical Education Advancement Modules). Advanced Statistical Process Control.

    ERIC Educational Resources Information Center

    Dunlap, Dale

    This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 20-hour advanced statistical process control (SPC) and quality improvement course designed to develop the following competencies: (1) understanding quality systems; (2) knowing the process; (3) solving quality problems; and (4)…

  29. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water-related infrastructure: it can be used to predict the magnitude of future extreme rainfall events for a given frequency of occurrence. This study analyzes the application of a rainfall partial duration series (PDS) for the fast-growing city of Madinah, located in western Saudi Arabia. Several statistical distributions were applied (Normal, Log Normal, Extreme Value Type I, Generalized Extreme Value, Pearson Type III, and Log Pearson Type III), with their parameters estimated by the method of L-moments. Several model selection criteria were also applied: the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC), and the Anderson-Darling Criterion (ADC). The analysis identified the Generalized Extreme Value distribution as the best-fitting statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
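
    A hedged sketch of the distribution-comparison step follows: candidate distributions are fitted to a simulated rainfall series and ranked by AIC. Note two deviations from the paper: scipy fits by maximum likelihood rather than L-moments, and only four of the six candidate families are shown.

      # Fit candidate distributions to a rainfall series and rank them by AIC.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # simulated partial-duration rainfall series (mm); stands in for observed data
      rain = stats.genextreme.rvs(c=-0.1, loc=30, scale=10, size=80, random_state=rng)

      candidates = {"GEV": stats.genextreme, "Gumbel (EV-I)": stats.gumbel_r,
                    "Log Normal": stats.lognorm, "Pearson III": stats.pearson3}
      for name, dist in candidates.items():
          params = dist.fit(rain)                      # maximum likelihood, not L-moments
          ll = dist.logpdf(rain, *params).sum()
          aic = 2 * len(params) - 2 * ll               # lower AIC = better fit
          print(f"{name:14s} AIC = {aic:.1f}")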

  30. Selected Oral Health Indicators in the United States, 2005-2008

    MedlinePlus

    ... errors of the percentages were estimated using Taylor series linearization, to take into account the complex sampling design. The statistical significance of differences between estimates were ...

  31. Use of Services for Family Planning and Infertility, United States, 1982. (Data from the National Survey of Family Growth, Series 23, No. 13).

    ERIC Educational Resources Information Center

    Horn, Marjorie C.; Mosher, William D.

    1986-01-01

    The National Survey of Family Growth is a periodic survey conducted by the National Center for Health Statistics, and designed to produce national estimates of statistics on fertility, family planning, and aspects of maternal and child health that are closely related to childbearing. This report presents statistics based on data collected in the…

  32. Research Designs for Intervention Research with Small Samples II: Stepped Wedge and Interrupted Time-Series Designs.

    PubMed

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, and Brown, Prevention Science, 2015) in this special section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small-sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples.

  33. Modeling Longitudinal Data with Generalized Additive Models: Applications to Single-Case Designs

    ERIC Educational Resources Information Center

    Sullivan, Kristynn J.; Shadish, William R.

    2013-01-01

    Single case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time both in the presence and absence of treatment. For a variety of reasons, interest in the statistical analysis and meta-analysis of these designs has been growing in recent years. This paper proposes modeling SCD data with…

  34. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
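
    The core of the statistical analysis described here can be sketched in a few lines: across a dilution series, counts from a well-behaved method should be proportional to the dilution fraction (assessed below by a zero-intercept fit and its R²) and should show low replicate variability (assessed by per-level coefficients of variation). The dilution levels, replicate counts, and error model are simulated assumptions, not the paper's data or exact metrics.

      # Dilution-series quality check: proportionality and replicate precision.
      import numpy as np

      rng = np.random.default_rng(8)
      fractions = np.repeat([1.0, 0.75, 0.5, 0.25], 4)          # 4 replicates per level
      true_conc = 1e6
      counts = rng.poisson(true_conc * fractions) * rng.normal(1, 0.03, fractions.size)

      # proportionality: zero-intercept least-squares fit and its R^2
      slope = (fractions * counts).sum() / (fractions ** 2).sum()
      r2 = 1 - ((counts - slope * fractions) ** 2).sum() / ((counts - counts.mean()) ** 2).sum()

      # precision: coefficient of variation within each dilution level
      cv = [counts[fractions == f].std(ddof=1) / counts[fractions == f].mean()
            for f in np.unique(fractions)]
      print(f"slope={slope:.3e}, R^2={r2:.4f}, per-level CV={np.round(cv, 3)}")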

  35. America's Black Population: 1970 to 1982. A Statistical View. Special Publication PIO/POP-83-1.

    ERIC Educational Resources Information Center

    Matney, William C.; Johnson, Dwight L.

    This pamphlet is the first in a series designed to provide a compilation of selected demographic, social, economic, and other statistical data relating to selected populations. Topics covered here (in both discussion and table/graph format) include Black population growth and distribution, residence, income gain, poverty rate increase, labor force…

  36. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  37. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  38. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  39. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  40. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  41. Women of the World: A Chartbook for Developing Regions.

    ERIC Educational Resources Information Center

    Jamison, Ellen

    The fifth in a series of five handbooks designed to present and analyze statistical data on women, this report presents a series of captioned charts describing the situation of women in Latin America and the Caribbean, Sub-Saharan Africa, the Near East and North Africa, and Asia and the Pacific. The original sources of data are the censuses and…

  42. Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.

    2000-01-01

    A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected; it required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.

  43. Detection of Undocumented Changepoints Using Multiple Test Statistics and Composite Reference Series.

    NASA Astrophysics Data System (ADS)

    Menne, Matthew J.; Williams, Claude N., Jr.

    2005-10-01

    An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve the detection of undocumented, artificial changepoints in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single-test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from serially complete and homogeneous component series. However, the evaluated composite series are not equally susceptible to the presence of changepoints in their components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on averaging the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., when values are missing), and its use is therefore not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated. A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
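
    As a toy version of the kind of undocumented-changepoint test evaluated in this study, the sketch below scans all admissible split points of a series for the largest two-sample t statistic and compares it with a Monte Carlo critical value. The segment-length floor, sample sizes, and normal null are illustrative assumptions; this is not any of the paper's three specific test statistics.

      # Max-t changepoint scan with Monte Carlo critical values. Toy data only.
      import numpy as np

      def max_t(x):
          n = len(x)
          best = 0.0
          for k in range(5, n - 5):                  # require a few points per segment
              a, b = x[:k], x[k:]
              s = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
              best = max(best, abs(a.mean() - b.mean()) / s)
          return best

      rng = np.random.default_rng(9)
      null = [max_t(rng.normal(size=100)) for _ in range(500)]
      crit = np.quantile(null, 0.95)                 # 5% critical value under no change

      series = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.0, 1, 40)])
      print(f"max-t = {max_t(series):.2f}, 95% critical value = {crit:.2f}")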

  44. Project T.E.A.M. (Technical Education Advancement Modules). Introduction to Statistical Process Control.

    ERIC Educational Resources Information Center

    Billings, Paul H.

    This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 6-hour introductory module on statistical process control (SPC), designed to develop competencies in the following skill areas: (1) identification of the three classes of SPC use; (2) understanding a process and how it works; (3)…

  45. An astronomer's guide to period searching

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    2003-03-01

    We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and his successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content, we divide the issues into several sections, consisting of: (ii) statistical and numerical aspects of model fitting; (iii) evaluation of fitted models as hypothesis testing; (iv) the role of orthogonal models in signal detection; (v) conditions for the equivalence of periodograms; and (vi) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.

  46. Interrupted time series regression for the evaluation of public health interventions: a tutorial.

    PubMed

    Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio

    2017-02-01

    Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.
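
    The segmented regression model at the heart of the tutorial can be sketched directly: an indicator term and an elapsed-time term capture the level change and slope change at the interruption. The snippet below, in Python with simulated monthly data, is a minimal illustration only and ignores the autocorrelation, seasonality, and over-dispersion adjustments the tutorial goes on to discuss.

      # Minimal segmented-regression sketch for an interrupted time series (ITS),
      # with a single known interruption point and simulated data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n, cut = 48, 24                           # 48 monthly observations, interruption at month 24
      t = np.arange(n)
      level = (t >= cut).astype(float)          # level-change indicator
      slope = np.where(t >= cut, t - cut, 0)    # slope-change term: time since interruption

      # simulated outcome: baseline trend plus a drop in level after the intervention
      y = 10 + 0.05 * t - 2.0 * level + rng.normal(0, 0.5, n)

      X = sm.add_constant(np.column_stack([t, level, slope]))
      fit = sm.OLS(y, X).fit()
      print(fit.params)                         # [intercept, pre-trend, level change, slope change]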

  47. On Instructional Utility, Statistical Methodology, and the Added Value of ECD: Lessons Learned from the Special Issue

    ERIC Educational Resources Information Center

    Nelson, Brian; Nugent, Rebecca; Rupp, Andre A.

    2012-01-01

    This special issue of "JEDM" was dedicated to bridging work done in the disciplines of "educational and psychological assessment" and "educational data mining" (EDM) via the assessment design and implementation framework of "evidence-centered design" (ECD). It consisted of a series of five papers: one…

  48. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  49. WEC Design Response Toolbox v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Ryan; Michelen, Carlos; Eckert-Gallup, Aubrey

    2016-03-30

    The WEC Design Response Toolbox (WDRT) is a numerical toolbox for design-response analysis of wave energy converters (WECs). The WDRT was developed during a series of efforts to better understand WEC survival design. The WDRT has been designed as a tool for researchers and developers, enabling the straightforward application of statistical and engineering methods. The toolbox includes methods for short-term extreme response, environmental characterization, long-term extreme response and risk analysis, fatigue, and design wave composition.

  50. Total Quality Management: Statistics and Graphics III - Experimental Design and Taguchi Methods. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schwabe, Robert A.

    Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…

  51. Science Library of Test Items. Volume Eight. Mastery Testing Program. Series 3 & 4 Supplements to Introduction and Manual.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    Continuing a series of short tests aimed at measuring student mastery of specific skills in the natural sciences, this supplementary volume includes teachers' notes, a users' guide and inspection copies of test items 27 to 50. Answer keys and test scoring statistics are provided. The items are designed for grades 7 through 10, and a list of the…

  52. Aerial Refueling Simulator Validation Using Operational Experimentation and Response Surface Methods with Time Series Responses

    DTIC Science & Technology

    2013-03-21

    Extraction fragments (table-of-contents entries such as "2.3 Time Series Response Data" and "2.4 Comparison of Response…," plus body text): "…evaluating the efficiency of the parameter estimates. In the past, the most popular form of response surface design used the D-optimality… A model can refer to almost anything in math, statistics, or computer science. It can be any 'physical, mathematical, or logical…'"

  53. Evaluation of the impact of fetal fibronectin test implementation on hospital admissions for preterm labour in Ontario: a multiple baseline time-series design.

    PubMed

    Fell, D B; Sprague, A E; Grimshaw, J M; Yasseen, A S; Coyle, D; Dunn, S I; Perkins, S L; Peterson, W E; Johnson, M; Bunting, P S; Walker, M C

    2014-03-01

    To determine the impact of a health system-wide fetal fibronectin (fFN) testing programme on the rates of hospital admission for preterm labour (PTL). Multiple baseline time-series design. Canadian province of Ontario. A retrospective population-based cohort of antepartum and delivered obstetrical admissions in all Ontario hospitals between 1 April 2002 and 31 March 2010. International Classification of Diseases codes in a health system-wide hospital administrative database were used to identify the study population and define the outcome measure. An aggregate time series of monthly rates of hospital admissions for PTL was analysed using segmented regression models after aligning the fFN test implementation date for each institution. Rate of obstetrical hospital admission for PTL. Estimated rates of hospital admission for PTL following fFN implementation were lower than predicted had pre-implementation trends prevailed. The reduction in the rate was modest, but statistically significant, when estimated at 12 months following fFN implementation (-0.96 hospital admissions for PTL per 100 preterm births; 95% confidence interval [CI], -1.02 to -0.90, P = 0.04). The statistically significant reduction was sustained at 24 and 36 months following implementation. Using a robust quasi-experimental study design to overcome confounding as a result of underlying secular trends or concurrent interventions, we found evidence of a small but statistically significant reduction in the health system-level rate of hospital admissions for PTL following implementation of fFN testing in a large Canadian province. © 2013 Royal College of Obstetricians and Gynaecologists.

  16. DESIGN OF EXPOSURE MEASUREMENTS FOR EPIDEMIOLOGIC STUDIES

    EPA Science Inventory

    This presentation will describe the following items: (1) London daily air pollution and deaths that demonstrate how time series epidemiology can indicate that air pollution caused death; (2) Sophisticated statistical models required to establish this relationship for lower pollut...

  17. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates.

    PubMed

    Xia, Li C; Steele, Joshua A; Cram, Jacob A; Cardon, Zoe G; Simmons, Sheri L; Vallino, Joseph J; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of the available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique into an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond that of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa.

  18. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates

    PubMed Central

    2011-01-01

    Background The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of the available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. Results We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique into an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. Conclusions The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond that of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa. PMID:22784572
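
    To make the LS statistic that eLSA builds on concrete, the following is a minimal Python reimplementation of the core score: a rank-based normal-score transform followed by a Kadane-style scan for the best-scoring (possibly time-shifted) subinterval of co-variation. It is an illustrative sketch under stated simplifications, not the authors' eLSA package, and it omits the replicate handling and permutation-based significance evaluation described above:

        import numpy as np
        from scipy.stats import rankdata, norm

        def normal_transform(x):
            """Rank-based normal-score transform used to robustify the series."""
            return norm.ppf(rankdata(x) / (len(x) + 1))

        def local_similarity(x, y, max_delay=3):
            """Magnitude of the strongest local (possibly delayed) association."""
            x, y = normal_transform(x), normal_transform(y)
            n, best = len(x), 0.0
            for d in range(-max_delay, max_delay + 1):
                # products of aligned observations at delay d
                prod = x[max(0, -d):n - max(0, d)] * y[max(0, d):n - max(0, -d)]
                for sign in (1.0, -1.0):   # track positive and negative association
                    run = 0.0
                    for p in sign * prod:  # Kadane: best-scoring local subinterval
                        run = max(0.0, run + p)
                        best = max(best, run)
            return best / n                # LS score

        rng = np.random.default_rng(1)
        a = rng.normal(size=50)
        b = np.roll(a, 2) + rng.normal(scale=0.5, size=50)  # delayed association
        print(local_similarity(a, b))

    In practice the score's significance is assessed by permutation, one of the steps the eLSA pipeline automates.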

  19. Optimizing Use of Water Management Systems during Changes of Hydrological Conditions

    NASA Astrophysics Data System (ADS)

    Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter

    2017-10-01

    When designing water management systems and their components, detailed knowledge is needed of the hydrological conditions of the river basin whose runoff forms the main source of water for the reservoir. Over the lifetime of a water management system, the hydrological time series that served as input for the design of the system components never repeat in the same form. The design assumes that the observed time series are representative of conditions during the system's use. This is a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. Designers of water management systems may therefore occasionally face undersized or oversized capacity designs, or incorrectly specified management rules, which can lead to non-optimal use. It is therefore necessary to establish a comprehensive approach to simulating fluctuations in interannual runoff (taking into account current dry and wet periods) through stochastic modelling techniques in water management practice. The paper presents a methodological procedure for modelling mean monthly flows using the stochastic Thomas-Fiering model, modified by the Wilson-Hilferty transformation of an independent random number; this transformation is usually applied when there is significant asymmetry in the observed time series. The procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g. mean, standard deviation and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows had the same statistical properties as the time series observed in the past. The model was able to account for the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, producing sets of flows that could occur in the future. The results of stochastic modelling in the form of synthetic time series of mean monthly flows, which account for the seasonal fluctuation of runoff within the year, could be applied in engineering hydrology (e.g. for optimal use of an existing water management system in connection with reassessment of the economic risks of the system).
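
    A compact sketch may help make the Thomas-Fiering recursion and the Wilson-Hilferty adjustment concrete. The code below is illustrative only: names are hypothetical, the monthly skew of the flows is used in place of the residual skew for brevity, and the observed record `q_obs` is assumed to be an array of shape (years, 12):

        import numpy as np
        from scipy.stats import skew

        def wilson_hilferty(z, g):
            """Approximate a skewed (Pearson III) deviate from a standard normal z."""
            if abs(g) < 1e-6:
                return z
            return (2.0 / g) * ((1.0 + g*z/6.0 - g*g/36.0) ** 3 - 1.0)

        def thomas_fiering(q_obs, n_years_out, seed=0):
            rng = np.random.default_rng(seed)
            mean = q_obs.mean(axis=0)            # monthly means
            sd = q_obs.std(axis=0, ddof=1)       # monthly standard deviations
            g = skew(q_obs, axis=0, bias=False)  # monthly skew coefficients
            r = np.empty(12)                     # month-to-month lag-1 correlations
            for j in range(11):
                r[j] = np.corrcoef(q_obs[:, j], q_obs[:, j + 1])[0, 1]
            r[11] = np.corrcoef(q_obs[:-1, 11], q_obs[1:, 0])[0, 1]  # Dec -> Jan

            q = [mean[0]]                        # start the synthetic series in month 1
            for t in range(1, n_years_out * 12):
                jp, j = (t - 1) % 12, t % 12
                b = r[jp] * sd[j] / sd[jp]       # month-to-month regression coefficient
                z = wilson_hilferty(rng.standard_normal(), g[j])
                nxt = (mean[j] + b * (q[-1] - mean[jp])
                       + z * sd[j] * np.sqrt(max(0.0, 1.0 - r[jp] ** 2)))
                q.append(max(nxt, 0.0))          # monthly flows cannot be negative
            return np.array(q)

    Comparing the mean, standard deviation and skewness of the synthetic and observed series, as the authors do, then provides a direct check on the quality of the simulation.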

  20. Statistical analysis and application of quasi experiments to antimicrobial resistance intervention studies.

    PubMed

    Shardell, Michelle; Harris, Anthony D; El-Kamary, Samer S; Furuno, Jon P; Miller, Ram R; Perencevich, Eli N

    2007-10-01

    Quasi-experimental study designs are frequently used to assess interventions that aim to limit the emergence of antimicrobial-resistant pathogens. However, previous studies using these designs have often used suboptimal statistical methods, which may result in researchers making spurious conclusions. Methods used to analyze quasi-experimental data include 2-group tests, regression analysis, and time-series analysis, and they all have specific assumptions, data requirements, strengths, and limitations. An example of a hospital-based intervention to reduce methicillin-resistant Staphylococcus aureus infection rates and reduce overall length of stay is used to explore these methods.

  1. Fully Bayesian Estimation of Data from Single Case Designs

    ERIC Educational Resources Information Center

    Rindskopf, David

    2013-01-01

    Single case designs (SCDs) generally consist of a small number of short time series in two or more phases. The analysis of SCDs statistically fits in the framework of a multilevel model, or hierarchical model. The usual analysis does not take into account the uncertainty in the estimation of the random effects. This not only has an effect on the…

  2. Evaluating efficiency and statistical power of self-controlled case series and self-controlled risk interval designs in vaccine safety.

    PubMed

    Li, Rongxia; Stewart, Brock; Weintraub, Eric

    2016-01-01

    The self-controlled case series (SCCS) and self-controlled risk interval (SCRI) designs have recently become widely used in the field of post-licensure vaccine safety monitoring to detect potential elevated risks of adverse events following vaccinations. The SCRI design can be viewed as a subset of the SCCS method in that a reduced comparison time window is used for the analysis. Compared to the SCCS method, the SCRI design has less statistical power because fewer events occur in the shorter control interval. In this study, we derived the asymptotic relative efficiency (ARE) between these two methods to quantify this loss of power in the SCRI design. The equation is formulated as [Formula: see text], where a is the control window-length ratio between the SCRI and SCCS designs, b is the ratio of risk window length to control window length in the SCCS design, and [Formula: see text] is the relative risk of the exposed window relative to the control window. According to this equation, the relative efficiency declines as the ratio of control-period length between the SCRI and SCCS methods decreases, or as the relative risk [Formula: see text] increases. We provide an example utilizing data from the Vaccine Safety Datalink (VSD) to study the potential elevated risk of febrile seizure following seasonal influenza vaccine in the 2010-2011 season.

  3. Optimized Design and Analysis of Sparse-Sampling fMRI Experiments

    PubMed Central

    Perrachione, Tyler K.; Ghosh, Satrajit S.

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742

  4. Optimized design and analysis of sparse-sampling FMRI experiments.

    PubMed

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power.

  5. Meta- and statistical analysis of single-case intervention research data: quantitative gifts and a wish list.

    PubMed

    Kratochwill, Thomas R; Levin, Joel R

    2014-04-01

    In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  6. How precise can atoms of a nanocluster be located in 3D using a tilt series of scanning transmission electron microscopy images?

    PubMed

    Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S

    2017-10-01

    In this paper, we investigate how precisely atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. The microcomputer scientific software series 3: general linear model--analysis of variance.

    Treesearch

    Harold M. Rauscher

    1985-01-01

    A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...

  8. Secondary School Mathematics Curriculum Improvement Study Information Bulletin 7.

    ERIC Educational Resources Information Center

    Secondary School Mathematics Curriculum Improvement Study, New York, NY.

    The background, objectives, and design of Secondary School Mathematics Curriculum Improvement Study (SSMCIS) are summarized. Details are given of the content of the text series, "Unified Modern Mathematics," in the areas of algebra, geometry, linear algebra, probability and statistics, analysis (calculus), logic, and computer…

  9. Two different flavours of complexity in financial data

    NASA Astrophysics Data System (ADS)

    Buonocore, R. J.; Musmeci, N.; Aste, T.; Matteo, T. Di

    2016-12-01

    We discuss two elements that define the complexity of financial time series: one is the multiscaling property, which is linked to how the statistics of a single time series change with the time horizon; the second is the structure of dependency between time series, which accounts for the collective behaviour, i.e. the market structure. Financial time series have statistical properties which change with the time horizon, and the quantification of this multiscaling property has been used successfully to distinguish among different degrees of market development, to monitor the stability of firms and to estimate risk. The study of the structure of dependency between time series with the use of information filtering graphs can reveal important insights into the market structure, highlighting risks, stress and portfolio management strategies. In this contribution we highlight achievements and major successes, and discuss major challenges and open problems in the study of these two elements of complexity, hoping to attract the interest of more researchers to this research area. We believe that with the advent of the Big Data era, the need for such approaches, designed to deal with systems with many degrees of freedom, and their further development have become more urgent.
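
    As a concrete illustration of the first element, the multiscaling property is often quantified through the generalized Hurst exponent H(q), estimated from the scaling of q-th order structure functions; for a uniscaling series H(q) is constant in q, while dependence on q signals multiscaling. The sketch below is illustrative (not the authors' code) and assumes `x` is a one-dimensional array such as log-prices:

        import numpy as np

        def generalized_hurst(x, qs=(1, 2, 3), taus=range(1, 20)):
            taus = np.asarray(list(taus))
            H = {}
            for q in qs:
                # q-th order structure function at each time horizon tau
                s = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
                slope = np.polyfit(np.log(taus), np.log(s), 1)[0]
                H[q] = slope / q   # H(q): roughly 0.5 at all q for Brownian motion
            return H

        rng = np.random.default_rng(2)
        bm = np.cumsum(rng.standard_normal(5000))  # Brownian motion benchmark
        print(generalized_hurst(bm))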

  10. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    PubMed Central

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  11. Statistical assessment of the learning curves of health technologies.

    PubMed

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis; and generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second was a case series of consecutive laparoscopic cholecystectomy procedures performed by ten surgeons; the third was randomised trial data derived from the laparoscopic procedure arm of a multicentre trial of groin hernia repair, supplemented by data from non-randomised operations performed during the trial. RESULTS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: Of 4571 abstracts identified, 272 (6%) were later included in the study after review of the full paper. Some 51% of studies assessed a surgical minimal access technique and 95% were case series. The statistical method used most often (60%) was splitting the data into consecutive parts (such as halves or thirds), with only 14% attempting a more formal statistical analysis. The reporting of the studies was poor, with 31% giving no details of data collection methods. RESULTS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: Of 9431 abstracts assessed, 115 (1%) were deemed appropriate for further investigation and, of these, 18 were included in the study. All of the methods for complex data sets were identified in the non-clinical literature. These were discriminant analysis, two-stage estimation of learning rates, generalised estimating equations, multilevel models, latent curve models, time series models and stochastic parameter models. In addition, eight new shapes of learning curves were identified. RESULTS - TESTING OF STATISTICAL METHODS: No one particular shape of learning curve performed significantly better than another. 
The performance of 'operation time' as a proxy for learning differed between the three procedures. Multilevel modelling using the laparoscopic cholecystectomy data demonstrated and measured surgeon-specific and confounding effects. The inclusion of non-randomised cases, despite the possible limitations of the method, enhanced the interpretation of learning effects. CONCLUSIONS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: The statistical methods used for assessing learning effects in health technology assessment have been crude and the reporting of studies poor. CONCLUSIONS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: A number of statistical methods for assessing learning effects were identified that had not hitherto been used in health technology assessment. There was a hierarchy of methods for the identification and measurement of learning, and the more sophisticated methods for both have had little if any use in health technology assessment. This demonstrated the value of considering fields outside clinical research when addressing methodological issues in health technology assessment. CONCLUSIONS - TESTING OF STATISTICAL METHODS: It has been demonstrated that the portfolio of techniques identified can enhance investigations of learning curve effects. (ABSTRACT TRUNCATED)

  12. National Aquatic Resource Surveys: Integration of Geospatial Data in Their Survey Design and Analysis

    EPA Science Inventory

    The National Aquatic Resource Surveys (NARS) are a series of four statistical surveys conducted by the U.S. Environmental Protection Agency working in collaboration with states, tribal nations and other federal agencies. The surveys are conducted for lakes and reservoirs, streams...

  13. National Wetland Condition Assessment 2011: A Collaborative Survey of the Nation's Wetlands

    EPA Science Inventory

    The National Wetland Condition Assessment 2011: A Collaborative Survey presents the results of an unprecedented assessment of the nation’s wetlands. This report is part of the National Aquatic Resource Surveys, a series of statistically based surveys designed to provide the publi...

  14. National Lakes Assessment: A Collaborative Survey of the Nation's Lakes

    EPA Science Inventory

    The National Lakes Assessment A Collaborative Survey of the Nation's Lakes presents the results of an unprecedented assessment of the nation’s lakes. This report is part of the National Aquatic Resource Surveys, a series of statistically based surveys designed to provide the pub...

  15. National Rivers and Streams Assessment 2008-2009: A Collaborative Survey

    EPA Science Inventory

    The National Rivers and Streams Assessment 2008–2009: A Collaborative Survey (NRSA) presents the results of an unprecedented assessment of the nation’s rivers and streams. This report is part of the National Aquatic Resource Surveys, a series of statistically based surveys design...

  16. Quasi-experimental study designs series-paper 10: synthesizing evidence for effects collected from quasi-experimental studies presents surmountable challenges.

    PubMed

    Becker, Betsy Jane; Aloe, Ariel M; Duvendack, Maren; Stanley, T D; Valentine, Jeffrey C; Fretheim, Atle; Tugwell, Peter

    2017-09-01

    To outline issues of importance to analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in analysis. We drew on studies of statistics, epidemiology, and social-science methodology to outline methods for synthesis of QE studies. The design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes were discussed. Biases, confounding, design complexities, and comparisons across designs offer serious challenges to syntheses of QEs. Key components of meta-analyses of QEs were identified, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls and any potential sources of bias and confounding must be modeled in analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. A statistical meta-regression model that incorporates important features of the QEs under review was presented. Meta-analyses of QEs provide particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks.

    PubMed

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software developed to produce predictions is a particularly big challenge for designers. Forecasting of time series data such as currency exchange rates, stock prices, and weather reports is an area in which extensive research has been going on for the last three decades. In the early days, the problems of financial analysis and prediction were solved by statistical models and methods. Over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve the problems of financial data and obtain accurate results in predicting future trends and prices. This paper addresses some architectural design issues for performance improvement through vectorising the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive approach for predicting exchange rates, which can be called a hybrid methodology for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used.

  18. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks

    PubMed Central

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software developed to produce predictions is a particularly big challenge for designers. Forecasting of time series data such as currency exchange rates, stock prices, and weather reports is an area in which extensive research has been going on for the last three decades. In the early days, the problems of financial analysis and prediction were solved by statistical models and methods. Over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve the problems of financial data and obtain accurate results in predicting future trends and prices. This paper addresses some architectural design issues for performance improvement through vectorising the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive approach for predicting exchange rates, which can be called a hybrid methodology for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used. PMID:26881271

  19. Pediatric emotional dysregulation and behavioral disruptiveness treated with hypnosis: a time-series design.

    PubMed

    Iglesias, Alex; Iglesias, Adam

    2014-01-01

    A case of pediatric oppositional defiant disorder (ODD) with concomitant emotional dysregulation and secondary behavioral disruptiveness was treated with hypnosis by means of the hypnotic hold, a method adapted by the authors. An A-B-A-B time-series design with multiple replications was employed to measure the relationship of the hypnotic treatment to the dependent measure: episodes of emotional dysregulation with accompanying behavioral disruptiveness. The findings indicated a statistically significant relationship between the degree of change from phase to phase and the treatment. Follow-up at 6 months indicated a significant reduction of the frequency of targeted episodes of emotional dysregulation and behavioral disruptiveness at home.

  20. The 1993 Mississippi river flood: A one hundred or a one thousand year event?

    USGS Publications Warehouse

    Malamud, B.D.; Turcotte, D.L.; Barton, C.C.

    1996-01-01

    Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to the partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to the annual series. LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River, comparing power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher, and more conservative dam designs and land-use restrictions may be required.
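
    The power-law side of the comparison is simple enough to sketch. The snippet below assigns empirical recurrence intervals T = N_years / rank to a partial-duration series of flood peaks, fits Q = C * T**alpha in log-log space, and inverts the fit for a given discharge; the peak data are synthetic stand-ins, not the Keokuk or Lees Ferry records:

        import numpy as np

        def power_law_recurrence(peaks, n_years, q_event):
            peaks = np.sort(np.asarray(peaks))[::-1]      # largest first
            T = n_years / np.arange(1, len(peaks) + 1)    # empirical recurrence intervals
            alpha, logC = np.polyfit(np.log(T), np.log(peaks), 1)
            # Invert Q = C * T**alpha for the recurrence interval of q_event
            return float(np.exp((np.log(q_event) - logC) / alpha))

        rng = np.random.default_rng(3)
        peaks = 1e4 * rng.pareto(2.0, size=120) + 1e4     # heavy-tailed synthetic peaks
        print(power_law_recurrence(peaks, n_years=60, q_event=2e5))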

  1. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    PubMed

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects of the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design.

  2. Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis

    ERIC Educational Resources Information Center

    Drougas, Anne; Harrington, Steve

    2011-01-01

    This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…

  3. 78 FR 11661 - Request for Information: Main Study Design for the National Children's Study

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-19

    Only fragments of this notice survive in the record: the workshop URL www.nationalchildrensstudy.gov/research/workshops/Pages/nationalacademyofsciencesworkshop.aspx; the DATES field ("RFI Release Date is…"); mention of …-response relationships, substudies embedded in the Vanguard Study or the Main Study, and formative research…; and a note that in the fall of 2012 the NCS held a series of meetings with federal and non-federal statistical sampling…

  4. National Lakes Assessment 2012: A Collaborative Survey of Lakes in the United States

    EPA Science Inventory

    The National Lakes Assessment 2012: A Collaborative Survey of Lakes in the United States presents the results of a second evaluation of the lakes in the United States. This report is part of the National Aquatic Resource Surveys, a series of statistically based surveys designed t...

  5. Virtualising the Quantitative Research Methods Course: An Island-Based Approach

    ERIC Educational Resources Information Center

    Baglin, James; Reece, John; Baker, Jenalle

    2015-01-01

    Many recent improvements in pedagogical practice have been enabled by the rapid development of innovative technologies, particularly for teaching quantitative research methods and statistics. This study describes the design, implementation, and evaluation of a series of specialised computer laboratory sessions. The sessions combined the use of an…

  6. Statistics for clinical nursing practice: an introduction.

    PubMed

    Rickard, Claire M

    2008-11-01

    Difficulty in understanding statistics is one of the most frequently reported barriers to nurses applying research results in their practice. Yet the amount of nursing research published each year continues to grow, as does the expectation that nurses will undertake practice based on this evidence. Critical care nurses do not need to be statisticians, but they do need to develop a working knowledge of statistics so they can be informed consumers of research and so practice can evolve and improve. For those undertaking a research project, statistical literacy is required to interact with other researchers and statisticians, so as to best design and undertake the project. This article is the first in a series that guides critical care nurses through statistical terms and concepts relevant to their practice.

  7. Arlequin suite ver 3.5: a new series of programs to perform population genetics analyses under Linux and Windows.

    PubMed

    Excoffier, Laurent; Lischer, Heidi E L

    2010-05-01

    We present here a new version of the Arlequin program available under three different forms: a Windows graphical version (Winarl35), a console version of Arlequin (arlecore), and a specific console version to compute summary statistics (arlsumstat). The command-line versions run under both Linux and Windows. The main innovations of the new version include enhanced outputs in XML format, the possibility to embed graphics displaying computation results directly into output files, and the implementation of a new method to detect loci under selection from genome scans. Command-line versions are designed to handle large series of files, and arlsumstat can be used to generate summary statistics from simulated data sets within an Approximate Bayesian Computation framework. © 2010 Blackwell Publishing Ltd.

  8. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    PubMed

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

    To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, so that data analysis results can be obtained in real time. Descriptive statistics, time-series analysis and multivariate regression analysis were implemented online using SQL and visual tools on top of the system's database software. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each part of the data online, with interactive connection to the database; and generates export tables that can be loaded directly into R, SAS and SPSS. The information system for air pollution and health impact monitoring thus implements the statistical analysis function online and can provide real-time analysis results to its users.

  9. Safe and effective nursing shift handover with NURSEPASS: An interrupted time series.

    PubMed

    Smeulers, Marian; Dolman, Christine D; Atema, Danielle; van Dieren, Susan; Maaskant, Jolanda M; Vermeulen, Hester

    2016-11-01

    To implement a locally developed, evidence-based nursing shift handover blueprint with a bedside safety check, and to determine its effect on handover quality. A mixed methods design with: (1) an interrupted time series analysis to determine the effect on handover quality in six domains; (2) descriptive statistics to analyze the discrepancies intercepted by the bedside safety check; (3) evaluation sessions to gather experiences with the new handover process. We observed a continued trend of improvement in handover quality and a significant improvement in two domains of handover: organization/efficiency and contents. The bedside safety check successfully identified discrepancies in drains, intravenous medications, bandages or general condition and was highly appreciated. Use of the nursing shift handover blueprint showed promising results for effectiveness as well as feasibility and acceptability. However, to enable long-term measurement of effectiveness, evaluation with large-scale interrupted time series or statistical process control is needed. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, Max

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. The thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.

  11. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    PubMed Central

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  12. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    PubMed

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  13. Women of the World: Sub-Saharan Africa.

    ERIC Educational Resources Information Center

    Newman, Jeanne S.

    The second in a series of five handbooks designed to present and analyze statistical data on women in various regions of the world, this handbook focuses on women in 40 countries of Sub-Saharan Africa. Beginning with an overview of population characteristics in the region, the analysis continues with a description of women's literacy and…

  14. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  15. A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network.

    PubMed

    Gharehbaghi, Arash; Linden, Maria

    2017-10-12

    This paper presents a novel method for learning the cyclic contents of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods at different levels of learning for enhanced performance. It is employed within a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic contents of the time series are preserved in an efficient manner. The paper suggests a systematic procedure for finding the design parameters of the classification method for a one-versus-multiple class application. A novel validation method is also suggested for evaluating the structural risk, both quantitatively and qualitatively. The effect of the DTGNN on the performance of the classifier is statistically validated through repeated random subsampling using different sets of CTS from different medical applications. The validation involves four medical databases, comprising 108 recordings of the electroencephalogram signal, 90 recordings of the electromyogram signal, 130 recordings of the heart sound signal, and 50 recordings of the respiratory sound signal. Results of the statistical validations show that the DTGNN significantly improves the performance of the classification and also exhibits an optimal structural risk.

  16. Non-parametric characterization of long-term rainfall time series

    NASA Astrophysics Data System (ADS)

    Tiwari, Harinarayan; Pandey, Brij Kishor

    2018-03-01

    The statistical study of rainfall time series is one approach to efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving the forecasting of hydrological systems. In the present study, eventual statistics were applied to the long-term (1851-2006) rainfall time series for seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The trend observed using this approach was then ascertained with the innovative trend analysis method. Innovative trend analysis has been found to be a strong tool for detecting the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends in the series. The partial sum of cumulative deviation test is also found to be suitable for detecting nonlinear trends. Innovative trend analysis, the sequential Mann-Kendall test and the partial cumulative deviation test have the potential to detect general as well as nonlinear trends in rainfall time series. Annual rainfall analysis suggests that the maximum change in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous India region. The innovative trend analysis method is also capable of finding the number of change points in the time series. Additionally, we performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis was applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones shows a higher departure from homogeneity, and the singular spectrum analysis results are coherent with this finding.
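
    For reference, the Mann-Kendall test applied above amounts to counting concordant and discordant pairs and normalizing. A minimal Python sketch, without the tie and autocorrelation corrections used in practice:

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            x = np.asarray(x)
            n = len(x)
            # S: number of increasing pairs minus number of decreasing pairs
            s = sum(np.sign(x[j] - x[i])
                    for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0  # no-ties variance
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            p = 2 * (1 - norm.cdf(abs(z)))            # two-sided p-value
            return s, z, p

        rng = np.random.default_rng(4)
        rain = 800 + 0.5 * np.arange(100) + rng.normal(0, 30, 100)  # weak upward trend
        print(mann_kendall(rain))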

  17. Production Process for Financial Data Tables in FHWA’s Highway Statistics Series - Highway Statistics Series: Revenue, Debt, and Expenditures.

    DOT National Transportation Integrated Search

    2015-11-01

    This report captures the production process, programs and steps used to produce the finance tables and charts published on the Federal Highway Administration's (FHWA's) Highway Statistics Series publication site made available to Congress and ...

  18. Descriptive data analysis.

    PubMed

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is the first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.

  19. Volcanic hazard assessment for the Canary Islands (Spain) using extreme value theory, and the recent volcanic eruption of El Hierro

    NASA Astrophysics Data System (ADS)

    Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.

    2012-04-01

    The Canary Islands are an active volcanic region, densely populated and visited by several million tourists every year. Nearly twenty eruptions have been reported in written chronicles in the last 600 years, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately to contribute to the design of appropriate preparedness plans. Hence, probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step in the assessment of volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions which is most appropriate given the nature of the documented historical eruptive data. We first characterise the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of a few large eruptions, since such data require special methods of analysis. Hence, we use a statistical method from extreme value theory. In particular, we apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of a magnitude greater than one in the upcoming years. Shortly after the publication of this method, an eruption on the island of El Hierro took place for the first time in historical times, supporting our method and contributing towards the validation of our results.
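
    In the simplest homogeneous special case, the probability statement reduces to the Poisson formula P = 1 - exp(-lambda * t); the non-homogeneous model used in the paper replaces lambda * t with the integral of a time-varying intensity. A toy illustration with a made-up rate, not the paper's estimate:

        import numpy as np

        def prob_at_least_one(rate_per_year, t_years):
            """P(at least one event in t_years) under a homogeneous Poisson model."""
            return 1.0 - np.exp(-rate_per_year * t_years)

        # e.g. ~20 reported eruptions in 600 years of chronicles -> rate ~ 0.033/yr
        print(prob_at_least_one(20 / 600, t_years=20))   # ~0.49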

  20. Environmental flow allocation and statistics calculator

    USGS Publications Warehouse

    Konrad, Christopher P.

    2011-01-01

    The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.

  1. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    PubMed

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or due to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: Its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
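
    A minimal sketch of the baseline-correction idea, assuming the correction amounts to removing a robust (Theil-Sen) baseline trend before computing Kendall's tau between phase and scores; Tarlow's published procedure may differ in detail, and this function is illustrative only.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

def baseline_corrected_tau(baseline, treatment):
    """Rank-correlation effect size after removing the baseline trend."""
    y = np.concatenate([baseline, treatment])
    t = np.arange(len(y))
    phase = np.concatenate([np.zeros(len(baseline)), np.ones(len(treatment))])
    # Robust Theil-Sen fit to the baseline phase only.
    slope, intercept, _, _ = theilslopes(baseline, t[: len(baseline)])
    corrected = y - (intercept + slope * t)   # extrapolate and remove the trend
    tau, p = kendalltau(phase, corrected)     # effect size bounded in [-1, +1]
    return tau, p

print(baseline_corrected_tau([3, 4, 5, 6, 7], [9, 11, 10, 12, 13]))
```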

  2. Effect of trial-to-trial variability on optimal event-related fMRI design: Implications for Beta-series correlation and multi-voxel pattern analysis

    PubMed Central

    Abdulrahman, Hunar; Henson, Richard N.

    2016-01-01

    Functional magnetic resonance imaging (fMRI) studies typically employ rapid, event-related designs for behavioral reasons and for reasons associated with statistical efficiency. Efficiency is calculated from the precision of the parameters (Betas) estimated from a General Linear Model (GLM) in which trial onsets are convolved with a Hemodynamic Response Function (HRF). However, previous calculations of efficiency have ignored likely variability in the neural response from trial to trial, for example due to attentional fluctuations, or different stimuli across trials. Here we compare three GLMs in their efficiency for estimating average and individual Betas across trials as a function of trial variability, scan noise and Stimulus Onset Asynchrony (SOA): “Least Squares All” (LSA), “Least Squares Separate” (LSS) and “Least Squares Unitary” (LSU). Estimation of responses to individual trials in particular is important for both functional connectivity using “Beta-series correlation” and “multi-voxel pattern analysis” (MVPA). Our simulations show that the ratio of trial-to-trial variability to scan noise impacts both the optimal SOA and optimal GLM, especially for short SOAs < 5 s: LSA is better when this ratio is high, whereas LSS and LSU are better when the ratio is low. For MVPA, the consistency across voxels of trial variability and of scan noise is also critical. These findings not only have important implications for design of experiments using Beta-series regression and MVPA, but also statistical parametric mapping studies that seek only efficient estimation of the mean response across trials. PMID:26549299
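
    To make the "Least Squares All" idea concrete, here is a toy sketch in which every trial gets its own convolved regressor and all Betas are estimated jointly. The HRF shape, onsets, and noise levels are invented for illustration and do not come from the paper.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
n_scans, tr = 200, 2.0                    # scan count and repetition time (s)
onsets = [10, 40, 70, 100, 130]           # trial onsets in seconds (invented)
t = np.arange(0, 30, tr)
hrf = gamma.pdf(t, a=6)                   # crude canonical-like HRF shape

# LSA design matrix: one convolved regressor per individual trial.
X = np.zeros((n_scans, len(onsets)))
for j, onset in enumerate(onsets):
    stick = np.zeros(n_scans)
    stick[int(onset / tr)] = 1.0
    X[:, j] = np.convolve(stick, hrf)[:n_scans]

true_betas = rng.normal(1.0, 0.5, len(onsets))        # trial-to-trial variability
y = X @ true_betas + rng.normal(0.0, 0.1, n_scans)    # plus scan noise
betas = np.linalg.lstsq(X, y, rcond=None)[0]          # all trials fit jointly
```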

  3. Raising the bar for reproducible science at the U.S. Environmental Protection Agency Office of Research and Development.

    PubMed

    George, Barbara Jane; Sobus, Jon R; Phelps, Lara P; Rashleigh, Brenda; Simmons, Jane Ellen; Hines, Ronald N

    2015-05-01

    Considerable concern has been raised regarding research reproducibility both within and outside the scientific community. Several factors possibly contribute to a lack of reproducibility, including a failure to adequately employ statistical considerations during study design, bias in sample selection or subject recruitment, errors in developing data inclusion/exclusion criteria, and flawed statistical analysis. To address some of these issues, several publishers have developed checklists that authors must complete. Others have either enhanced statistical expertise on existing editorial boards, or formed distinct statistics editorial boards. Although the U.S. Environmental Protection Agency, Office of Research and Development, already has a strong Quality Assurance Program, an initiative was undertaken to further strengthen statistical considerations and other factors in study design, and also to ensure these same factors are evaluated during the review and approval of study protocols. To raise awareness of the importance of statistical issues and provide a forum for robust discussion, a Community of Practice for Statistics was formed in January 2014. In addition, three working groups were established to develop a series of questions or criteria that should be considered when designing or reviewing experimental, observational, or modeling focused research. This article describes the process used to develop these study design guidance documents, their contents, how they are being employed by the Agency's research enterprise, and expected benefits to Agency science. The process and guidance documents presented here may be of utility for any research enterprise interested in enhancing the reproducibility of its science. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.

  4. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    NASA Astrophysics Data System (ADS)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and a dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost similar. Both models produced almost similar estimates and were better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.

  5. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction

    PubMed Central

    Carleton, W. Christopher; Campbell, David

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating—the most common chronometric technique in archaeological and palaeoenvironmental research—creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329
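
    As a rough intuition for the PEWMA idea, the sketch below produces exponentially weighted one-step-ahead mean forecasts for a count series. This is a heavily simplified stand-in: the full observation-driven PEWMA model has additional structure (a Poisson observation equation with a multiplicative latent mean) that this toy omits, and the discount factor is an assumption.

```python
import numpy as np

def ewma_count_forecast(counts, w=0.8):
    """One-step-ahead mean forecasts for a count series via exponential smoothing."""
    mean = float(counts[0])
    preds = [mean]
    for y in counts[:-1]:
        mean = w * mean + (1.0 - w) * y   # discount older observations
        preds.append(mean)
    return np.array(preds)

counts = np.random.default_rng(2).poisson(5.0, size=50)  # fake count series
print(ewma_count_forecast(counts)[:5])
```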

  7. DataTrack 6: Blacks and Hispanics in the United States.

    ERIC Educational Resources Information Center

    American Council of Life Insurance, Washington, DC.

    Sixth in a series of reports which compile and interpret statistical information of direct concern to life insurance executives, this report deals with Blacks and Hispanics in the United States. It can be used in the design of new products and services to meet changing consumer needs, the selection of new markets and marketing strategies, the…

  8. Women of the World: Asia and the Pacific.

    ERIC Educational Resources Information Center

    Shah, Nasra M.

    The fourth in a series of five handbooks designed to present and analyze statistical data on women in various regions of the world, this handbook focuses on women in 14 countries of Asia and the Pacific. Beginning with an overview of population distribution and changes in the region, the analysis continues with a description of women's literacy…

  9. Women of the World: Latin America and the Caribbean.

    ERIC Educational Resources Information Center

    Chaney, Elsa M.

    The first in a series of five handbooks designed to present and analyze statistical data on women in various regions of the world, this handbook focuses on women in 21 countries in Latin America and the Caribbean. Beginning with an overview of population characteristics of the regions, the analysis continues with a description of women's literacy…

  10. Statistical sampling methods for soils monitoring

    Treesearch

    Ann M. Abbott

    2010-01-01

    Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan are described, and a case...

  11. The New Alcoholics: Teenagers. Public Affairs Pamphlet No. 499.

    ERIC Educational Resources Information Center

    Saltman, Jules

    This brief pamphlet on teenage alcoholism is one in a series published by the Public Affairs Committee. It was designed to give concise and useful information on teenage alcohol problems, and was written for both adults and youth. Statistics are offered as proof that large numbers of American teenagers are already problem drinkers. The current…

  12. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series:
    T. W. Anderson, The Statistical Analysis of Time Series
    T. S. Arthanari & Yadolah Dodge, Mathematical Programming in Statistics
    Emil Artin, Geometric Algebra
    Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences
    Robert G. Bartle, The Elements of Integration and Lebesgue Measure
    George E. P. Box & Norman R. Draper, Evolutionary Operation: A Statistical Method for Process Improvement
    George E. P. Box & George C. Tiao, Bayesian Inference in Statistical Analysis
    R. W. Carter, Finite Groups of Lie Type: Conjugacy Classes and Complex Characters
    R. W. Carter, Simple Groups of Lie Type
    William G. Cochran & Gertrude M. Cox, Experimental Designs, Second Edition
    Richard Courant, Differential and Integral Calculus, Volume I
    Richard Courant, Differential and Integral Calculus, Volume II
    Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume I
    Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume II
    D. R. Cox, Planning of Experiments
    Harold S. M. Coxeter, Introduction to Geometry, Second Edition
    Charles W. Curtis & Irving Reiner, Representation Theory of Finite Groups and Associative Algebras
    Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I
    Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II
    Cuthbert Daniel, Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition
    Bruno de Finetti, Theory of Probability, Volume I
    Bruno de Finetti, Theory of Probability, Volume 2
    W. Edwards Deming, Sample Design in Business Research

  13. Time Series Analysis Based on Running Mann Whitney Z Statistics

    USDA-ARS?s Scientific Manuscript database

    A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
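
    The windowing recipe in this record can be sketched directly. The abstract is truncated, so the window length below and the use of a normal approximation for the Z conversion (rather than the Monte Carlo normalization the abstract begins to describe) are assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def running_mw_z(x, w=20):
    """Slide two adjacent windows along x; convert each Mann-Whitney U to Z."""
    z = []
    for i in range(len(x) - 2 * w):
        a, b = x[i : i + w], x[i + w : i + 2 * w]
        u = mannwhitneyu(a, b).statistic
        mu = w * w / 2.0                                 # mean of U under H0
        sigma = np.sqrt(w * w * (2 * w + 1) / 12.0)      # sd of U, no ties
        z.append((u - mu) / sigma)                       # normal approximation
    return np.array(z)

x = np.concatenate([np.random.default_rng(5).normal(0, 1, 100),
                    np.random.default_rng(6).normal(1, 1, 100)])  # shifted mean
print(np.abs(running_mw_z(x)).max())  # large |Z| flags the change
```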

  14. Volcanic hazard assessment for the Canary Islands (Spain) using extreme value theory

    NASA Astrophysics Data System (ADS)

    Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.

    2011-10-01

    The Canary Islands are a densely populated active volcanic region visited by several million tourists every year. Nearly twenty eruptions have been reported through written chronicles in the last 600 yr, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately contribute to the design of appropriate preparedness plans. Hence, the probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step for the assessment of volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions which is most appropriate given the nature of the documented historical eruptive data. We first characterize the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of a few data points from large eruptions, since these data require special methods of analysis. Hence, we use a statistical method from extreme value theory. In particular, we apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of a magnitude greater than one in the upcoming years. This is done in three steps: first, we analyze the historical eruptive series to assess independence and homogeneity of the process; second, we perform a Weibull analysis of the distribution of repose times between successive eruptions; third, we analyze the non-homogeneous Poisson process with a generalized Pareto distribution as the intensity function.
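
    The headline probability in both Canary Islands records reduces to a basic property of the non-homogeneous Poisson process: P(at least one event in [0, T]) = 1 - exp(-Λ(T)), where Λ(T) is the integrated intensity. The sketch below uses an invented intensity function; the papers fit theirs from the eruptive record with a generalized Pareto component that this toy omits.

```python
import numpy as np
from scipy.integrate import quad

lam = lambda t: 0.03 * (1.0 + 0.01 * t)     # illustrative intensity (events/yr)
horizon = 20.0                              # forecast window in years

expected, _ = quad(lam, 0.0, horizon)       # integrated intensity Lambda(T)
p_at_least_one = 1.0 - np.exp(-expected)    # P(N >= 1) = 1 - exp(-Lambda)
print(f"P(at least one event in {horizon:.0f} yr) = {p_at_least_one:.3f}")
```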

  15. Content-Based VLE Designs Improve Learning Efficiency in Constructivist Statistics Education

    PubMed Central

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    Background We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Methods Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. Results The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design. PMID:21998652

  16. Evaluating the efficiency of environmental monitoring programs

    USGS Publications Warehouse

    Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina

    2014-01-01

    Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
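
    As one concrete instance of the bootstrapping method described here, the sketch below resamples invented plot data to show how the confidence interval for a mean tightens with sampling intensity; the numbers are illustrative and are not from the Hubbard Brook inventory.

```python
import numpy as np

rng = np.random.default_rng(42)
plots = rng.lognormal(mean=5.0, sigma=0.4, size=120)  # fake plot-level values

def boot_ci(sample_size, n_boot=2000):
    """Bootstrap 95% CI of the mean for a given plot sampling intensity."""
    means = [rng.choice(plots, size=sample_size, replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [2.5, 97.5])

for n in (10, 30, 60, 120):
    lo, hi = boot_ci(n)
    print(f"n={n:3d}: 95% CI of the mean = [{lo:.1f}, {hi:.1f}]")
```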

  17. Supporting creativity and appreciation of uncertainty in exploring geo-coded public health data.

    PubMed

    Thew, S L; Sutcliffe, A; de Bruijn, O; McNaught, J; Procter, R; Jarvis, Paul; Buchan, I

    2011-01-01

    We present a prototype visualisation tool, ADVISES (Adaptive Visualization for e-Science), designed to support epidemiologists and public health practitioners in exploring geo-coded datasets and generating spatial epidemiological hypotheses. The tool is designed to support creative thinking while providing the means for the user to evaluate the validity of the visualization in terms of statistical uncertainty. We present an overview of the application and the results of an evaluation exploring public health researchers' responses to maps as a new way of viewing familiar data, in particular the use of thematic maps with adjoining descriptive statistics and forest plots to support the generation and evaluation of new hypotheses. A series of qualitative evaluations involved one experienced researcher asking 21 volunteers to interact with the system to perform a series of relatively complex, realistic map-building and exploration tasks, using a 'think aloud' protocol, followed by a semi-structured interview. The volunteers were academic epidemiologists and UK National Health Service analysts. All users quickly and confidently created maps, and went on to spend substantial amounts of time exploring and interacting with the system, generating hypotheses about their maps. Our findings suggest that the tool is able to support creativity and statistical appreciation among public health professionals and epidemiologists building thematic maps. Software such as this, introduced appropriately, could increase the capability of existing personnel for generating public health intelligence.

  18. Space shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1980-01-01

    The effects of atmospheric turbulence in both horizontal and near horizontal flight, during the return of the space shuttle, are important for determining design, control, and 'pilot-in-the-loop' effects. A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes which are entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 10,000 meters. The turbulence generation procedure is described as well as the results of validating the simulated turbulence. Conclusions and recommendations are presented and references cited. The tabulated one dimensional von Karman spectra and the results of spectral and statistical analyses of the SSTT are contained in the appendix.
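
    A nonrecursive (frequency-domain) turbulence simulator of the kind described can be sketched in a few lines: shape white noise with a von Karman-like spectrum and invert the FFT. The spectral form, length-scale ratio, and sampling choices below are illustrative assumptions, not the report's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt = 4096, 0.05                          # samples and time step (s), assumed
f = np.fft.rfftfreq(n, dt)
L_over_U, sigma2 = 2.5, 1.0                 # scale/airspeed ratio and variance

# von Karman-like one-sided PSD for the longitudinal gust component.
psd = sigma2 * L_over_U / (1.0 + (1.339 * L_over_U * f) ** 2) ** (5.0 / 6.0)
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(f)))
gust = np.fft.irfft(np.sqrt(psd) * phases, n)  # one gust time-series realization
gust *= np.sqrt(sigma2) / gust.std()           # rescale to the target variance
```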

  19. Design-of-experiments to Reduce Life-cycle Costs in Combat Aircraft Inlets

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Baust, Henry D.; Agrell, Johan

    2003-01-01

    It is the purpose of this study to demonstrate the viability and economy of Design-of-Experiments (DOE) to arrive at micro-secondary flow control installation designs that achieve optimal inlet performance for different mission strategies. These statistical design concepts were used to investigate the properties of "low unit strength" micro-effector installations. "Low unit strength" micro-effectors are micro-vanes set at a very low angle of incidence, with very long chord lengths. They are designed to influence the near-wall inlet flow over an extended streamwise distance. In this study, however, the long chord lengths were replicated by a series of short chord length effectors arranged in series over multiple bands of effectors. In order to properly evaluate the performance differences between the single-band extended chord length installation designs and the segmented multiband short chord length designs, both sets of installations must be optimal. Critical to achieving optimal micro-secondary flow control installation designs is an understanding of the factor interactions that occur between the multiple bands of micro-scale vane effectors. These factor interactions are best understood and brought together in an optimal manner through a structured DOE process, or more specifically Response Surface Methods (RSM).
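
    The Response Surface Methods step amounts to fitting a quadratic surface to responses measured at designed factor settings and then searching it for an optimum. The sketch below is a generic two-factor example; the factor names, design points, and response values are invented and do not come from the study.

```python
import numpy as np

# Coded factor settings (e.g., vane angle, band spacing) and a response such
# as total pressure recovery; all values are invented for illustration.
x1 = np.array([-1, -1, 1, 1, -1, 1, 0, 0, 0], float)
x2 = np.array([-1, 1, -1, 1, 0, 0, -1, 1, 0], float)
y = np.array([0.91, 0.93, 0.94, 0.97, 0.92, 0.96, 0.94, 0.95, 0.96])

# Full quadratic response surface: intercept, linear, interaction, quadratic.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# 'coef' defines the fitted surface, which can then be searched for the
# factor settings that maximize the predicted inlet performance.
print(coef)
```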

  20. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the author's techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The author's method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.

  1. Impact of climate change on precipitation and temperature under the RCP 8.5 and A1B scenarios in an Alpine catchment (Alto-Genil Basin, southeast Spain). A comparison of statistical downscaling methods

    NASA Astrophysics Data System (ADS)

    Pulido-Velazquez, David; Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Jimeno-Saez, Patricia; Fernandez-Chacon, Francisca

    2016-04-01

    In order to design adaptive strategies for global change we need to assess the future impact of climate change on water resources, which depends on the precipitation and temperature series in the systems. The objective of this work is to generate future climate series for the "Alto Genil" Basin (southeast Spain) for the period 2071-2100 by perturbing the historical series using different statistical methods. For this purpose we use information from regional climate model (RCM) simulations available from two European projects: CORDEX (2013), with a spatial resolution of 12.5 km, and ENSEMBLES (2009), with a spatial resolution of 25 km. The historical climate series used for the period 1971-2000 were obtained from the Spain02 project (2012), which has the same spatial resolution as the CORDEX project (both use the EURO-CORDEX grid). Two emission scenarios have been considered: the Representative Concentration Pathway (RCP) 8.5 emission scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and the A1B emission scenario of the Fourth Assessment Report (AR4). We use the RCM simulations to create an ensemble of predictions, weighting their information according to their ability to reproduce the main statistics of the historical climatology. A multi-objective analysis has been performed to identify which models are better in terms of goodness of fit to the cited statistics of the historical series. The ensembles for the CORDEX and ENSEMBLES projects were finally created with nine and four models respectively. These ensemble series have been used to assess the anomalies in mean and standard deviation (differences between the control and future RCM series). A "delta-change" method (Pulido-Velazquez et al., 2011) has been applied to define future series by modifying the historical climate series in accordance with the cited anomalies in mean and standard deviation. A comparison between the results for scenarios A1B and RCP8.5 has been performed. The reductions obtained in mean rainfall with respect to the historical series are 24.2% and 24.4% respectively, and the increments in temperature are 46.3% and 31.2% respectively. A sensitivity analysis of the results to the statistical downscaling technique employed has been performed. The following techniques have been explored: the perturbation or "delta-change" method; the regression method (a regression function relating the RCM output to the historical information is used to generate future climate series for the fixed period); quantile mapping (which attempts to find a transformation function relating the observed and modeled variables so that the transformed variable has a statistical distribution equal to that of the observed variable); and stochastic weather generators (SWG), which can be uni-site or multi-site (the latter considering the spatial correlation of climatic series). A comparative analysis of these techniques has been performed, identifying the advantages and disadvantages of each. Acknowledgments: This research has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02, ENSEMBLES and CORDEX projects for the data provided for this study.
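
    The cited "delta-change" perturbation can be sketched compactly: shift the historical series by the change in the RCM mean and rescale its deviations by the ratio of standard deviations. This is a minimal reading of the approach in Pulido-Velazquez et al. (2011); the function below is illustrative.

```python
import numpy as np

def delta_change(historical, rcm_control, rcm_future):
    """Perturb a historical series by RCM anomalies in mean and std. dev."""
    d_mean = rcm_future.mean() - rcm_control.mean()   # shift in the mean
    r_std = rcm_future.std() / rcm_control.std()      # scaling of variability
    h_mean = historical.mean()
    # Rescale deviations around the mean, then re-centre on the shifted mean.
    return (h_mean + d_mean) + r_std * (historical - h_mean)

rng = np.random.default_rng(0)
hist = rng.normal(10.0, 3.0, 360)        # fake monthly temperatures
future = delta_change(hist, rng.normal(10, 3, 360), rng.normal(13, 3.5, 360))
```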

  2. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    NASA Astrophysics Data System (ADS)

    Veronesi, F.; Grassi, S.

    2016-09-01

    Wind resource assessment is a key aspect of wind farm planning since it allows to estimate the long term electricity production. Moreover, wind speed time-series at high resolution are helpful to estimate the temporal changes of the electricity generation and indispensable to design stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation, but the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the wind speed hourly estimates. Moreover, for each mapped location this method delivers not only the mean wind speed, but also its confidence interval, which are crucial data for planners.

  3. Testing for nonlinearity in time series: The method of surrogate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, J.; Galdrikian, B.; Longtin, A.

    1991-01-01

    We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
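
    The simplest surrogate-generation recipe consistent with a linear Gaussian null is phase randomization: keep the power spectrum, scramble the Fourier phases. The sketch below is a minimal version of that standard construction; it is one of several surrogate algorithms the paper presents, not a reproduction of its code.

```python
import numpy as np

def phase_randomized_surrogate(x, rng=None):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    rng = rng or np.random.default_rng()
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0        # keep the mean (DC) component real
    phases[-1] = 0.0       # and the Nyquist bin, for even-length series
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), len(x))

# Typical use: compute a discriminating statistic (e.g., forecasting error)
# on the original series and on many surrogates; a value far outside the
# surrogate distribution is evidence against the linear null hypothesis.
```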

  4. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    NASA Astrophysics Data System (ADS)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

    Assessing the impacts of potential future climate change scenarios on precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods for generating future potential scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches is the historical climatic data (taken from the Spain02 project for the period 1971-2000, with a spatial resolution of 12.5 km) and the future series provided by climate models for the horizon period 2071-2100. We have used information from nine climate model simulations (obtained from five different regional climate models (RCMs) nested to four different global climate models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emission scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction; first and second moment correction; regression functions; quantile mapping using a distribution-derived transformation; and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series are proposed to obtain more representative potential future climate scenarios for studying potential impacts. In this work we propose a non-equal-weight combination of the future series, giving more weight to those coming from models (delta-change approaches), or combinations of models and techniques, that better approximate the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit in reproducing the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the theory of runs. This analysis allows us to discriminate the best RCM and the best combination of model and correction technique in the bias-correction method. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed for our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79%, 31.79%, 31.03% and 31.74% for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles respectively, and in precipitation they are -25.48%, -28.49%, -26.42% and -27.35% respectively.
Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank Spain02 and CORDEX projects for the data provided for this study and the R package qmap.
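
    Of the transformation techniques listed, empirical quantile mapping is the easiest to sketch: each raw model value is mapped through the observed distribution at the quantile it occupies in the control run. A minimal version, with illustrative inputs (the study itself used the R package qmap):

```python
import numpy as np

def quantile_map(raw_future, control, observed):
    """Map raw model values through observed empirical quantiles."""
    # Quantile each future value occupies within the control-run distribution.
    q = np.searchsorted(np.sort(control), raw_future) / float(len(control))
    q = np.clip(q, 0.0, 1.0)
    # The observed value at that same quantile becomes the corrected value.
    return np.quantile(observed, q)

rng = np.random.default_rng(3)
obs, ctrl = rng.gamma(2, 30, 1000), rng.gamma(2, 40, 1000)  # biased model
corrected = quantile_map(rng.gamma(1.8, 40, 1000), ctrl, obs)
```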

  5. Child Abuse and Neglect Fatalities: Statistics and Interventions

    MedlinePlus

    Child Abuse and Neglect Fatalities 2015: Statistics and Interventions. Series: Numbers and Trends.

  6. A bibliometric analysis of statistical terms used in American Physical Therapy Association journals (2011-2012): evidence for educating physical therapists.

    PubMed

    Tilson, Julie K; Marshall, Katie; Tam, Jodi J; Fetters, Linda

    2016-04-22

    A primary barrier to the implementation of evidence based practice (EBP) in physical therapy is therapists' limited ability to understand and interpret statistics. Physical therapists demonstrate limited skills and report low self-efficacy for interpreting results of statistical procedures. While standards for physical therapist education include statistics, little empirical evidence is available to inform what should constitute such curricula. The purpose of this study was to conduct a census of the statistical terms and study designs used in physical therapy literature and to use the results to make recommendations for curricular development in physical therapist education. We conducted a bibliometric analysis of 14 peer-reviewed journals associated with the American Physical Therapy Association over 12 months (Oct 2011-Sept 2012). Trained raters recorded every statistical term appearing in identified systematic reviews, primary research reports, and case series and case reports. Investigator-reported study design was also recorded. Terms representing the same statistical test or concept were combined into a single, representative term. Cumulative percentage was used to identify the most common representative statistical terms. Common representative terms were organized into eight categories to inform curricular design. Of 485 articles reviewed, 391 met the inclusion criteria. These 391 articles used 532 different terms which were combined into 321 representative terms; 13.1 (sd = 8.0) terms per article. Eighty-one representative terms constituted 90% of all representative term occurrences. Of the remaining 240 representative terms, 105 (44%) were used in only one article. The most common study design was prospective cohort (32.5%). Physical therapy literature contains a large number of statistical terms and concepts for readers to navigate. However, in the year sampled, 81 representative terms accounted for 90% of all occurrences. These "common representative terms" can be used to inform curricula to promote physical therapists' skills, competency, and confidence in interpreting statistics in their professional literature. We make specific recommendations for curriculum development informed by our findings.

  7. Measuring Changes in Salaries and Wages in Public Schools: 1988 Edition. ERS School Management Reference Series.

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    The Educational Research Service annually reports an important statistical measure, the Composite Indicator of Changes in Average Salaries and Wages Paid by Public School Systems (CIC), which is designed to reflect overall changes in average salaries and wages paid by school systems in much the same way that the Dow Jones Average or the Standard…

  8. Women of the World: Near East and North Africa.

    ERIC Educational Resources Information Center

    Chamie, Mary

    The third in a series of five handbooks designed to present and analyze statistical data on women in various regions of the world, this handbook focuses on women in 14 countries in the Near East and North Africa. Beginning with an overview of population distribution and changes in the region, the analysis continues with a description of women's…

  9. Impact of Monetary Incentives and Mailing Procedures: An Experiment in a Federally Sponsored Telephone Survey. Methodology Report. NCES 2006-066

    ERIC Educational Resources Information Center

    Brick, J. Michael; Hagedorn, Mary Collins; Montaquila, Jill; Roth, Shelley Brock; Chapman, Christopher

    2006-01-01

    The National Household Education Surveys Program (NHES) includes a series of random digit dial (RDD) surveys developed by the National Center for Education Statistics (NCES) in the Institute of Education Sciences, U.S. Department of Education. It is designed to collect information on important educational issues through telephone surveys of…

  10. Statistics without Tears: Complex Statistics with Simple Arithmetic

    ERIC Educational Resources Information Center

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…

  11. Introduction to the Special Series: Current Directions for Measuring Parenting Constructs to Inform Prevention Science.

    PubMed

    Lindhiem, Oliver; Shaffer, Anne

    2017-04-01

    Parenting behaviors are multifaceted and dynamic and therefore challenging to quantify. Measurement methods have critical implications for study results, particularly for prevention trials designed to modify parenting behaviors. Although multiple approaches can complement one another and contribute to a more complete understanding of prevention trials, the assumptions and implications of each approach are not always clearly addressed. Greater attention to the measurement of complex constructs such as parenting is needed to advance the field of prevention science. This series examines the challenges of measuring changes in parenting behaviors in the context of prevention trials. All manuscripts in the special series address measurement issues and make practical recommendations for prevention researchers. Manuscripts in this special series include (1) empirical studies that demonstrate novel measurement approaches, (2) re-analyses of prevention trial outcome data directly comparing and contrasting two or more methods, and (3) a statistical primer and practical guide to analyzing proportion data.

  12. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
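
    The paper derives exact variance expressions; as a simpler stand-in, the sketch below computes the autocorrelation by FFT and applies the familiar ±1.96/√N white-noise band to flag lags showing evidence of correlation. The band is the standard large-sample approximation, not the paper's result.

```python
import numpy as np

def autocorr_fft(x):
    """Normalized autocorrelation computed via the Fourier transform."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    spec = np.fft.rfft(x, 2 * n)                 # zero-pad to avoid wrap-around
    acov = np.fft.irfft(np.abs(spec) ** 2)[:n] / n
    return acov / acov[0]

x = np.random.default_rng(3).normal(size=500)    # i.i.d. null case
rho = autocorr_fft(x)
band = 1.96 / np.sqrt(len(x))                    # approx. 95% bound under H0
significant = np.nonzero(np.abs(rho[1:]) > band)[0] + 1
print(f"{len(significant)} of {len(rho) - 1} lags exceed the white-noise band")
```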

  13. [Quality of clinical studies published in the RBGO over one decade (1999-2009): methodological and ethical aspects and statistical procedures].

    PubMed

    de Sá, Joceline Cássia Ferezini; Marini, Gabriela; Gelaleti, Rafael Bottaro; da Silva, João Batista; de Azevedo, George Gantas; Rudge, Marilza Vieira Cunha

    2013-11-01

    To evaluate the methodological and statistical design evolution of the publications in the Brazilian Journal of Gynecology and Obstetrics (RBGO) following Resolution 196/96. A review of 133 articles published in 1999 (65) and 2009 (68) was performed by two independent reviewers with training in clinical epidemiology and scientific research methodology. We included all original clinical articles, case and series reports, and excluded editorials, letters to the editor, systematic reviews, experimental studies, opinion articles, and abstracts of theses and dissertations. Characteristics related to the methodological quality of the studies were analyzed in each article using a checklist that evaluated two criteria: methodological aspects and statistical procedures. We used descriptive statistics and the χ2 test for comparison of the two years. There was a difference between 1999 and 2009 regarding study and statistical design, with more accurate procedures and the use of more robust tests in the later year. In RBGO, we observed an evolution in the methods of published articles and a more in-depth use of statistical analyses, with more sophisticated tests such as regression and multilevel analyses, which are essential techniques for the knowledge and planning of health interventions, leading to fewer interpretation errors.

  14. Design of a testing strategy using non-animal based test methods: lessons learnt from the ACuteTox project.

    PubMed

    Kopp-Schneider, Annette; Prieto, Pilar; Kinsner-Ovaskainen, Agnieszka; Stanzel, Sven

    2013-06-01

    In the framework of toxicology, a testing strategy can be viewed as a series of steps which are taken to come to a final prediction about a characteristic of a compound under study. The testing strategy is performed either as a single-step procedure, usually called a test battery, which uses all information collected on different endpoints simultaneously, or as a tiered approach in which a decision tree is followed. Design of a testing strategy involves statistical considerations, such as the development of a statistical prediction model. During the EU FP6 ACuteTox project, several prediction models were proposed on the basis of statistical classification algorithms, which we illustrate here. The final choice of testing strategies was not based on statistical considerations alone. However, without thorough statistical evaluation a testing strategy cannot be identified. We present here a number of observations made from the statistical viewpoint which relate to the development of testing strategies. The points we make were derived from problems we had to deal with during the evaluation of this large research project. A central issue during the development of a prediction model is the danger of overfitting. Procedures are presented to deal with this challenge. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A method to reconstruct long precipitation series using systematic descriptive observations in weather diaries: the example of the precipitation series for Bern, Switzerland (1760-2003)

    NASA Astrophysics Data System (ADS)

    Gimmi, U.; Luterbacher, J.; Pfister, C.; Wanner, H.

    2007-01-01

    In contrast to barometric and thermometric records, early instrumental precipitation series are quite rare. Based on systematic descriptive daily records, a quantitative monthly precipitation series for Bern (Switzerland) was reconstructed back to the year 1760 (reconstruction based on documentary evidence). Since every observer had his own personal style of filling out his diary, the main focus was to avoid observer-specific bias in the reconstruction. An independent statistical monthly precipitation reconstruction was performed using instrumental data from European sites. Over most periods the reconstruction based on documentary evidence lies within two standard errors of the statistical estimates. The comparison between these two approaches enables an independent verification and a reliable error estimate. The analysis points to below normal rainfall totals in all seasons during the late 18th century and in the 1820s and 1830s. Increased precipitation occurred in the early 1850s and the late 1870s, particularly from spring to autumn. The annual precipitation totals generally tend to be higher in the 20th century than in the late 18th and 19th century. Precipitation changes are discussed in the context of socioeconomic impacts and Alpine glacier dynamics. The conceptual design of the reconstruction procedure is aimed at application to similar descriptive precipitation series, which are known to be abundant from the mid-18th century in Europe and the U.S.

  16. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation addresses three topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers which include knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and a nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistically robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed low-tension tape drive.
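
    For reference, the standard unscented transformation underlying this work propagates a small set of sigma points through a nonlinearity and recovers the output mean and covariance. The sketch below is the textbook version; the dissertation's higher-order extensions are not reproduced here, and kappa is a tuning assumption.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through nonlinearity f via 2n+1 sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)         # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T]) # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])               # transformed points
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: propagate a Gaussian through a polar-to-Cartesian transform.
m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
print(unscented_transform(m, P, f))
```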

  17. Unit and Item Response, Weighting, and Imputation Procedures in the 1995 National Household Education Survey (NHES:95). Working Paper Series.

    ERIC Educational Resources Information Center

    Brick, J. Michael; Broene, Pam

    The National Household Education Survey (NHES) is a data collection effort of the National Center for Education Statistics that collects and publishes data on the condition of education in the United States. The NHES is designed to provide information on issues that are best addressed by contacting households rather than institutions. It is a…

  18. A Comparative Study of Family Planning Service Statistics Systems in the ESCAP Region. Asian Population Studies Series No. 15.

    ERIC Educational Resources Information Center

    United Nations Economic and Social Commission for Asia and the Pacific, Bangkok (Thailand).

    This monograph contains a study conducted by the Population Division of the United Nations Economic and Social Committee for Asia and the Pacific (ESCAP). The document is designed to aid policy-makers, administrators and evaluation personnel in family planning programs in the ESCAP region, primarily; and researchers working in the field of family…

  19. Understanding characteristics in multivariate traffic flow time series from complex network structure

    NASA Astrophysics Data System (ADS)

    Yan, Ying; Zhang, Shen; Tang, Jinjun; Wang, Xiaofei

    2017-07-01

    Discovering dynamic characteristics in traffic flow is a significant step in designing effective traffic management and control strategies for relieving congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway over one year. In order to construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate the similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of the networks, normalized network structure entropy and cumulative probability of degree, are utilized to explore the hourly variation in traffic flow. The results demonstrate that these two statistical quantities exhibit patterns similar to the traffic flow parameters, with morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak and transitional hours, according to the correlation between the two aforementioned properties. The resulting classification captures the hourly fluctuation in traffic flow, as confirmed by analyzing annual average hourly values of traffic volume, occupancy and speed in the corresponding hours.
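
    A minimal sketch of the pipeline: measure pairwise similarity between detectors with a Frobenius-norm distance, threshold the distances into an adjacency matrix, and compute the normalized structure entropy from the degree distribution. The PCA-derived weights of the paper are replaced by equal weights here, and the data and threshold rule are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(size=(30, 3, 288))  # 30 detectors x 3 variables x samples

# Frobenius-norm distance between each pair of detector matrices.
d = np.linalg.norm(series[:, None] - series[None, :], axis=(2, 3))
adj = (d < np.quantile(d[d > 0], 0.2)) & (d > 0)   # keep the closest 20% as edges

deg = adj.sum(axis=1).astype(float)
p = deg / deg.sum()                                 # degree distribution
entropy = -(p[p > 0] * np.log(p[p > 0])).sum() / np.log(len(p))  # normalized
print(f"normalized network structure entropy = {entropy:.3f}")
```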

  20. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauge sites. However, the available data sets are usually too short to provide reliable estimates of extreme design floods. To reduce estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the large increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis incorporating the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites, and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to generate a large number of discharge series with characteristics similar to the observed ones (type of statistical distribution, number of sites and records) to evaluate the extent to which the results obtained in these case studies can be generalized. The two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, the results show that incorporating information on extreme events, either historical floods at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
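
    An illustrative Monte Carlo experiment in the spirit of the study: compare the error of a 100-year quantile estimated from one short local record against a pooled regional sample, assuming a perfectly homogeneous Gumbel region. The distribution, record lengths, and naive pooling scheme are all assumptions; heterogeneity, the paper's central concern, would be introduced by varying the site parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    loc, scale, T = 100.0, 30.0, 100.0
    q_true = loc - scale * np.log(-np.log(1 - 1 / T))   # true Gumbel T-year quantile

    def gumbel_quantile(x, T):
        # Method-of-moments Gumbel fit, then the T-year quantile.
        s = np.std(x) * np.sqrt(6) / np.pi
        m = np.mean(x) - 0.5772 * s
        return m - s * np.log(-np.log(1 - 1 / T))

    err_local, err_regional = [], []
    for _ in range(2000):
        sites = rng.gumbel(loc, scale, size=(10, 30))   # 10 sites, 30 years each
        err_local.append(gumbel_quantile(sites[0], T) - q_true)
        err_regional.append(gumbel_quantile(sites.ravel(), T) - q_true)

    rmse = lambda e: np.sqrt(np.mean(np.square(e)))
    print(rmse(err_local), rmse(err_regional))   # pooling shrinks the error here
    ```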

  1. A Study on Predictive Analytics Application to Ship Machinery Maintenance

    DTIC Science & Technology

    2013-09-01

    Looking at the nature of the time series forecasting method, it would be better applied to offline analysis. The application for real-time online...other system attributes in future. Two techniques of statistical analysis, mainly time series models and cumulative sum control charts, are discussed in...statistical tool employed for the two techniques of statistical analysis. Both time series forecasting as well as CUSUM control charts are shown to be

  2. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    DTIC Science & Technology

    2016-05-12

    valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes...mathematical statistics; time series; Markov chains; random...proved. Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A

  3. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892

  4. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
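
    A minimal sketch of the surrogate-testing idea evaluated here: generate red-noise (AR(1)) surrogates matched to the lag-1 autocorrelation of the observed series and ask whether the observed spectral peak exceeds the surrogate null distribution. Fourier power stands in for wavelet power to keep the sketch dependency-free, and the AR(1) null is exactly the kind of resampling assumption the paper cautions against.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(256)
    x = np.sin(2 * np.pi * t / 32) + rng.normal(0, 1, t.size)   # toy periodic series

    phi = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
    sigma = np.std(x) * np.sqrt(1 - phi ** 2)     # innovation scale matching var(x)

    def red_noise(n, phi, sigma, rng):
        out = np.zeros(n)
        for i in range(1, n):
            out[i] = phi * out[i - 1] + rng.normal(0, sigma)
        return out

    obs_peak = np.abs(np.fft.rfft(x - x.mean())).max()
    null_peaks = [np.abs(np.fft.rfft(red_noise(t.size, phi, sigma, rng))).max()
                  for _ in range(500)]
    p_value = np.mean(np.array(null_peaks) >= obs_peak)
    ```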

  5. Statistical analysis of long-term monitoring data for persistent organic pollutants in the atmosphere at 20 monitoring stations broadly indicates declining concentrations.

    PubMed

    Kong, Deguo; MacLeod, Matthew; Hung, Hayley; Cousins, Ian T

    2014-11-04

    During recent decades concentrations of persistent organic pollutants (POPs) in the atmosphere have been monitored at multiple stations worldwide. We used three statistical methods to analyze a total of 748 time series of selected POPs in the atmosphere to determine if there are statistically significant reductions in levels of POPs that have had control actions enacted to restrict or eliminate manufacture, use and emissions. Significant decreasing trends were identified in 560 (75%) of the 748 time series collected from the Arctic, North America, and Europe, indicating that the atmospheric concentrations of these POPs are generally decreasing, consistent with the overall effectiveness of emission control actions. Statistically significant trends in synthetic time series could be reliably identified with the improved Mann-Kendall (iMK) test and the digital filtration (DF) technique in time series longer than 5 years. The temporal trends of new (or emerging) POPs in the atmosphere are often unclear because time series are too short. A statistical detrending method based on the iMK test was not able to identify abrupt changes in the rates of decline of atmospheric POP concentrations encoded into synthetic time series.
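
    A minimal sketch of the classical Mann-Kendall trend test underlying the improved (iMK) variant used in the study: the S statistic, its no-ties variance under the null of no monotonic trend, and the normal-approximation Z score. The simulated declining series is illustrative.

    ```python
    import numpy as np
    from math import erf, sqrt

    def mann_kendall(x):
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0         # variance assuming no ties
        z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
        return s, z, p

    rng = np.random.default_rng(3)
    conc = 50 * np.exp(-0.05 * np.arange(120)) * rng.lognormal(0, 0.3, 120)
    print(mann_kendall(conc))   # strongly negative S and Z: significant decline
    ```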

  6. A powerful approach for association analysis incorporating imprinting effects

    PubMed Central

    Xia, Fan; Zhou, Ji-Yuan; Fung, Wing Kam

    2011-01-01

    Motivation: For a diallelic marker locus, the transmission disequilibrium test (TDT) is a simple and powerful design for genetic studies. The TDT was originally proposed for use in families with both parents available (complete nuclear families) and has further been extended to 1-TDT for use in families with only one of the parents available (incomplete nuclear families). Currently, the increasing interest of the influence of parental imprinting on heritability indicates the importance of incorporating imprinting effects into the mapping of association variants. Results: In this article, we extend the TDT-type statistics to incorporate imprinting effects and develop a series of new test statistics in a general two-stage framework for association studies. Our test statistics enjoy the nature of family-based designs that need no assumption of Hardy–Weinberg equilibrium. Also, the proposed methods accommodate complete and incomplete nuclear families with one or more affected children. In the simulation study, we verify the validity of the proposed test statistics under various scenarios, and compare the powers of the proposed statistics with some existing test statistics. It is shown that our methods greatly improve the power for detecting association in the presence of imprinting effects. We further demonstrate the advantage of our methods by the application of the proposed test statistics to a rheumatoid arthritis dataset. Contact: wingfung@hku.hk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21798962

  7. A powerful approach for association analysis incorporating imprinting effects.

    PubMed

    Xia, Fan; Zhou, Ji-Yuan; Fung, Wing Kam

    2011-09-15

    For a diallelic marker locus, the transmission disequilibrium test (TDT) is a simple and powerful design for genetic studies. The TDT was originally proposed for use in families with both parents available (complete nuclear families) and has further been extended to 1-TDT for use in families with only one of the parents available (incomplete nuclear families). Currently, the increasing interest of the influence of parental imprinting on heritability indicates the importance of incorporating imprinting effects into the mapping of association variants. In this article, we extend the TDT-type statistics to incorporate imprinting effects and develop a series of new test statistics in a general two-stage framework for association studies. Our test statistics enjoy the nature of family-based designs that need no assumption of Hardy-Weinberg equilibrium. Also, the proposed methods accommodate complete and incomplete nuclear families with one or more affected children. In the simulation study, we verify the validity of the proposed test statistics under various scenarios, and compare the powers of the proposed statistics with some existing test statistics. It is shown that our methods greatly improve the power for detecting association in the presence of imprinting effects. We further demonstrate the advantage of our methods by the application of the proposed test statistics to a rheumatoid arthritis dataset. wingfung@hku.hk Supplementary data are available at Bioinformatics online.
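
    A minimal sketch of the classical TDT on which the extensions above build: with b transmissions of one allele and c of the other from heterozygous parents to affected children, the McNemar-type statistic (b - c)^2 / (b + c) is approximately chi-square with 1 df under the null of no association. The imprinting-aware statistics in the paper split these counts by parental origin, which is not shown; the counts below are hypothetical.

    ```python
    from scipy.stats import chi2

    def tdt(b, c):
        """Classical transmission disequilibrium test statistic and p-value."""
        stat = (b - c) ** 2 / (b + c)
        return stat, chi2.sf(stat, df=1)

    stat, p = tdt(b=48, c=27)   # hypothetical transmission counts
    print(stat, p)
    ```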

  8. Time Series Model Identification by Estimating Information.

    DTIC Science & Technology

    1982-11-01

    principle, Applications of Statistics, P. R. Krishnaiah, ed., North-Holland: Amsterdam, 27-41. Anderson, T. W. (1971). The Statistical Analysis of Time Series...E. (1969). Multiple Time Series Modeling, Multivariate Analysis II, edited by P. Krishnaiah, Academic Press: New York, 389-409. Parzen, E. (1981...Newton, H. J. (1980). Multiple Time Series Modeling, II Multivariate Analysis - V, edited by P. Krishnaiah, North Holland: Amsterdam, 181-197. Shibata, R

  9. A statistical approach for generating synthetic tip stress data from limited CPT soundings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basalams, M.K.

    CPT tip stress data obtained from a uranium mill tailings impoundment are treated as time series. A statistical class of models developed for time series is explored to investigate its applicability to modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimation of the total settlement using the measured and the generated series subjected to the same loading condition is performed. The variation of friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third-order autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
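
    A minimal sketch of the modeling step under stated assumptions: a simulated tip-stress profile is standardized and fitted with an AR(3) model, and a synthetic series is generated from the fit. The data are simulated here, and the regionalization of parameters by multiple regression is not shown.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    n = 300
    q = np.zeros(n)
    for i in range(3, n):                     # simulate an AR(3)-like depth profile
        q[i] = 0.5 * q[i-1] + 0.2 * q[i-2] + 0.1 * q[i-3] + rng.normal()
    q = 8.0 + 2.0 * q                         # shift/scale to "tip stress" units

    z = (q - q.mean()) / q.std()              # standardize before fitting
    fit = ARIMA(z, order=(3, 0, 0)).fit()
    synthetic = q.mean() + q.std() * fit.simulate(nsimulations=n)
    ```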

  10. Effects of alcohol tax increases on alcohol-related disease mortality in Alaska: time-series analyses from 1976 to 2004.

    PubMed

    Wagenaar, Alexander C; Maldonado-Molina, Mildred M; Wagenaar, Bradley H

    2009-08-01

    We evaluated the effects of tax increases on alcoholic beverages in 1983 and 2002 on alcohol-related disease mortality in Alaska. We used a quasi-experimental design with quarterly measures of mortality from 1976 through 2004, and we included other states for comparison. Our statistical approach combined an autoregressive integrated moving average model with structural parameters in interrupted time-series models. We observed statistically significant reductions in the numbers and rates of deaths caused by alcohol-related disease beginning immediately after the 1983 and 2002 alcohol tax increases in Alaska. In terms of effect size, the reductions were -29% (Cohen's d = -0.57) and -11% (Cohen's d = -0.52) for the 2 tax increases. Statistical tests of temporary-effect models versus long-term-effect models showed little dissipation of the effect over time. Increases in alcohol excise tax rates were associated with immediate and sustained reductions in alcohol-related disease mortality in Alaska. Reductions in mortality occurred after 2 tax increases almost 20 years apart. Taxing alcoholic beverages is an effective public health strategy for reducing the burden of alcohol-related disease.
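
    A minimal sketch of the general strategy, not the paper's exact model: an ARIMA error structure with a step regressor marking the tax change, fitted to simulated quarterly mortality. The effect size, noise level, and (1,0,0) order are assumptions for illustration.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(5)
    n, t0 = 116, 30                            # quarterly series, intervention at t0
    step = (np.arange(n) >= t0).astype(float)
    y = 100 + 5 * np.sin(np.arange(n) * np.pi / 2) - 15 * step + rng.normal(0, 3, n)

    fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
    print(fit.params)   # the exog coefficient estimates the level shift (about -15)
    ```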

  11. Comparison of different synthetic 5-min rainfall time series regarding their suitability for urban drainage modelling

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András

    2015-04-01

    For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often rely on uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. They are then compared with respect to simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are suitable in principle, but with different strengths and weaknesses regarding the rainfall and runoff characteristics considered.

  12. Nonlinear heart rate variability measures under electromagnetic fields produced by GSM cellular phones.

    PubMed

    Parazzini, Marta; Ravazzani, Paolo; Thuroczy, György; Molnar, Ferenc B; Ardesi, Gianluca; Sacchettini, Alessio; Mainardi, Luca Tommaso

    2013-06-01

    This study was designed to assess the nonlinear dynamics of heart rate variability (HRV) during exposure to low-intensity EMFs. Twenty-six healthy young volunteers were subjected to a rest-to-stand protocol to evaluate autonomic nervous system in quiet condition (rest, vagal prevalence) and after a sympathetic activation (stand). The procedure was conducted twice in a double-blind design: once with a genuine EMFs exposure (GSM cellular phone at 900 MHz, 2 W) and once with a sham exposure (at least 24 h apart). During each session, three-lead electrocardiograms were recorded and RR series extracted off-line. The RR series were analyzed by nonlinear deterministic techniques in every phase of the protocol and during the different exposures. The analysis of the data shows there was no statistically significant effect due to GSM exposure on the nonlinear dynamics of HRV.

  13. A tale of two audits: statistical process control for improving diabetes care in primary care settings.

    PubMed

    Al-Hussein, Fahad Abdullah

    2008-01-01

    Diabetes constitutes a major burden of disease globally. Both primary and secondary prevention need to improve in order to face this challenge. Improving management of diabetes in primary care is therefore of fundamental importance. The objective of this series of audits was to find means of improving diabetes management in chronic disease mini-clinics in primary health care. In the process, we were able to study the effect and practical usefulness of different audit designs - those measuring clinical outcomes, process of care, or both. The setting was the King Saud City Family and Community Medicine Centre, Saudi National Guard Health Affairs, in Riyadh city, Saudi Arabia. Simple random samples of 30 files were selected every two weeks from a sampling frame of file numbers for all diabetes clients seen over the period. Information was transferred to a form, entered on the computer, and an automated response was generated regarding the appropriateness of management, a criterion mutually agreed upon by care providers. The results were plotted on statistical process control charts (p charts) displayed for all employees. Data extraction, archiving, entry, analysis, plotting, and design and preparation of p charts were managed by nursing staff specially trained for the purpose by physicians with relevant previous experience. Audit series with mixed outcome and process measures failed to detect any changes in the proportion of non-conforming cases over a period of one year. The process measures series, on the other hand, showed improvement in care corresponding to a reduction in the proportion non-conforming by 10% within a period of 3 months. Non-conformities dropped from a mean of 5.0 to 1.4 over the year (P < 0.001). It is possible to improve providers' behaviour regarding implementation of given guidelines through periodic process audits and feedback. Frequent process audits in the context of statistical process control should be supplemented with concurrent outcome audits, once or twice a year.
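
    A minimal sketch of the p-chart logic behind these audits: for biweekly samples of n = 30 files, the proportion non-conforming is plotted against control limits p-bar +/- 3 * sqrt(p-bar * (1 - p-bar) / n). The audit proportions below are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 30
    p_hat = rng.binomial(n, 0.15, size=26) / n     # 26 biweekly audit samples

    p_bar = p_hat.mean()
    sigma = np.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(p_bar - 3 * sigma, 0.0)
    out_of_control = np.flatnonzero((p_hat > ucl) | (p_hat < lcl))
    ```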

  14. Empirical performance of the self-controlled case series design: lessons for developing a risk identification and analysis system.

    PubMed

    Suchard, Marc A; Zorych, Ivan; Simpson, Shawn E; Schuemie, Martijn J; Ryan, Patrick B; Madigan, David

    2013-10-01

    The self-controlled case series (SCCS) offers potential as a statistical method for risk identification involving medical products from large-scale observational healthcare data. However, analytic design choices remain in encoding the longitudinal health records into the SCCS framework, and its risk identification performance across real-world databases is unknown. To evaluate the performance of SCCS and its design choices as a tool for risk identification in observational healthcare data, we examined the risk identification performance of SCCS across five design choices using 399 drug-health outcome pairs in five real observational databases (four administrative claims and one electronic health records). In these databases, the pairs involve 165 positive controls and 234 negative controls. We also consider several synthetic databases with known relative risks between drug-outcome pairs. We evaluate risk identification performance by estimating the area under the receiver-operator characteristic curve (AUC), and bias and coverage probability in the synthetic examples. The SCCS achieves strong predictive performance. Twelve of the twenty health outcome-database scenarios return AUCs >0.75 across all drugs. Including all adverse events instead of just the first per patient and applying a multivariate adjustment for concomitant drug use are the most important design choices. However, the SCCS as applied here returns relative risk point-estimates biased towards the null value of 1 with low coverage probability. The SCCS, recently extended to apply a multivariate adjustment for concomitant drug use, offers promise as a statistical tool for risk identification in large-scale observational healthcare databases. Poor estimator calibration dampens enthusiasm, but ongoing work should correct this shortcoming.

  15. Library Statistics of Colleges and Universities, 1963-1964. Analytic Report.

    ERIC Educational Resources Information Center

    Samore, Theodore

    The series of analytic reports on management and salary data of the academic libraries, paralleling the series titled "Library Statistics of Colleges and Universities, Institutional Data," is continued by this publication. The statistical tables of this report are of value to administrators, librarians, and others because: (1) they help…

  16. Contribution of artificial intelligence to the knowledge of prognostic factors in laryngeal carcinoma.

    PubMed

    Zapater, E; Moreno, S; Fortea, M A; Campos, A; Armengot, M; Basterra, J

    2000-11-01

    Many studies have investigated prognostic factors in laryngeal carcinoma, with sometimes conflicting results. Apart from the importance of environmental factors, the different statistical methods employed may have influenced such discrepancies. A program based on artificial intelligence techniques is designed to determine the prognostic factors in a series of 122 laryngeal carcinomas. The results obtained are compared with those derived from two classical statistical methods (Cox regression and mortality tables). Tumor location was found to be the most important prognostic factor by all methods. The proposed intelligent system is found to be a sound method capable of detecting exceptional cases.

  17. Clinical and Radiographic Mid-Term Outcomes After Total Shoulder Replacement: A Retrospective Study Protocol Including 400 Anatomical and Reverse Prosthetic Implants

    PubMed Central

    Merolla, Giovanni; Tartarone, Antonio; Porcellini, Giuseppe

    2016-01-01

    Objectives: To obtain outcomes data on anatomical and reverse total shoulder arthroplasty by analysis of clinical scores and standard radiographs. Subject selection and enrollment: a consecutive series of 400 patients treated with anatomical and reverse total shoulder arthroplasty (minimum 3 years of follow-up). Study design: retrospective, single-center. Preoperative assessment: demographics, clinical scores (Constant-Murley) as available, shoulder X-ray (AP, outlet and axillary views). Last follow-up: postoperative radiographs and clinical scores. Adverse events and complications to be reported as they occurred since implantation. Statistical analysis: data collected will be summarized and analyzed for statistical significance. PMID:27326389

  18. Views of medical students: what, when and how do they want statistics taught?

    PubMed

    Fielding, S; Poobalan, A; Prescott, G J; Marais, D; Aucott, L

    2015-11-01

    A key skill for a practising clinician is being able to do research, understand the statistical analyses and interpret results in the medical literature. Basic statistics has become essential within medical education, but when, what and in which format to teach it is uncertain. To inform curriculum design/development we undertook a quantitative survey of fifth year medical students and followed them up with a series of focus groups to obtain their opinions as to what statistics teaching they want, when and how. A total of 145 students undertook the survey and five focus groups were held with between 3 and 9 participants each. Previous statistical training varied; students recognised their knowledge was inadequate and were keen to see additional training implemented. Students were aware of the importance of statistics to their future careers, but apprehensive about learning. Face-to-face teaching supported by online resources was popular. Focus groups indicated the need for statistical training early in the degree and highlighted students' lack of confidence and inconsistencies in support. The study found that students see the importance of statistics training in the medical curriculum but that timing and mode of delivery are key. The findings have informed the design of a new course to be implemented in the third undergraduate year. Teaching will be based around published studies, aiming to equip students with the basics required, with additional resources available through a virtual learning environment. © The Author(s) 2015.

  19. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and a discussion of the minimum frequency simulated is provided. The results of spectral and statistical analyses of the SSTT are presented.

  20. Using a detailed uncertainty analysis to adjust mapped rates of forest disturbance derived from Landsat time series data (Invited)

    NASA Astrophysics Data System (ADS)

    Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.

    2013-12-01

    Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to timing, magnitude, and duration of individual disturbance events, and causal agent. But all maps have error and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a relatively rare event spatially in any given year, omission errors can have a great impact on mapped rates. Using a high quality reference disturbance dataset, it is possible to not only characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics project) based on the Vegetation Change Tracker (VCT) with annual Landsat time series and uncertainty analyses that consist of three basic components: response design, statistical design, and analyses. The response design describes the reference data collection, in terms of the tool used (TimeSync), a formal description of interpretations, and the approach for data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analyses involve derivation of standard agreement matrices between the map and the reference data, and use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because for NAFD we use annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985-present. Preliminary evaluations indicate that VCT captures most of the higher intensity disturbances, but that many of the lower intensity disturbances (thinnings, stress related to insects and disease, etc.) are missed. Because lower intensity disturbances are a large proportion of the total set of disturbances, adjusting mapped disturbance rates to include these can be important for inclusion in ecosystem process models. The described statistical disturbance rate adjustments are aspatial in nature, such that the basic underlying map is unchanged. For spatially explicit ecosystem modeling, such adjustments, although important, can be difficult to directly incorporate. One approach for improving the basic underlying map is an ensemble modeling approach that uses several different complementary maps, each derived from a different algorithm and having their own strengths and weaknesses relative to disturbance magnitude and causal agent of disturbance. We will present results from a pilot study associated with the Landscape Change Monitoring System (LCMS), an emerging national-level program that builds upon NAFD and the well-established Monitoring Trends in Burn Severity (MTBS) program.

  1. Training Effectiveness Assessment. Volume II. Problems, Concepts, and Evaluation Alternatives.

    DTIC Science & Technology

    1976-12-01

    information about areas where course improvement might be indicated. Percentiles, pretest and posttest scores, or other measures of amount...statistical sophistication. Interpretation of gain scores derived from pretests-posttests of trainees and other forms of trend analysis requires...CPM), computer-managed testing (CMI), time-series analysis, pretest/posttest design, and secondary analysis. Criterion-referenced measurement is

  2. Family Day Care in the United States: Family Day Care Systems. Final Report of the National Day Care Home Study. Volume 5.

    ERIC Educational Resources Information Center

    Grasso, Janet; Fosburg, Steven

    Fifth in a series of seven volumes reporting the design, methodology, and findings of the 4-year National Day Care Home Study (NDCHS), this volume presents a descriptive and statistical analysis of the day care institutions that administer day care systems. These systems, such as Learning Unlimited in Los Angeles and the family day care program of…

  3. Standards for Clinical Trials in Male and Female Sexual Dysfunction: I. Phase I to Phase IV Clinical Trial Design.

    PubMed

    Fisher, William A; Gruenwald, Ilan; Jannini, Emmanuele A; Lev-Sagie, Ahinoam; Lowenstein, Lior; Pyke, Robert E; Reisman, Yakov; Revicki, Dennis A; Rubio-Aurioles, Eusebio

    2016-12-01

    This series of articles outlines standards for clinical trials of treatments for male and female sexual dysfunctions, with a focus on research design and patient-reported outcome assessment. These articles consist of revision, updating, and integration of articles on standards for clinical trials in male and female sexual dysfunction from the 2010 International Consultation on Sexual Medicine developed by the authors as part of the 2015 International Consultation on Sexual Medicine. We are guided in this effort by several principles. In contrast to previous versions of these guidelines, we merge discussion of standards for clinical trials in male and female sexual dysfunction in an integrated approach that emphasizes the common foundational practices that underlie clinical trials in the two settings. We present a common expected standard for clinical trial design in male and female sexual dysfunction, a common rationale for the design of phase I to IV clinical trials, and common considerations for selection of study population and study duration in male and female sexual dysfunction. We present a focused discussion of fundamental principles in patient- (and partner-) reported outcome assessment and complete this series of articles with specific discussions of selected aspects of clinical trials that are unique to male and to female sexual dysfunction. Our consideration of standards for clinical trials in male and female sexual dysfunction attempts to embody sensitivity to existing and new regulatory guidance and to address implications of the evolution of the diagnosis of sexual dysfunction that have been brought forward in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. The first article in this series focuses on phase I to phase IV clinical trial design considerations. Subsequent articles in this series focus on the measurement of patient-reported outcomes, unique aspects of clinical trial design for men, and unique aspects of clinical trial design for women. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  4. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    PubMed

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
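
    A minimal sketch of the segmented-regression side of the comparison: level-change and slope-change terms at the intervention point, estimated by ordinary least squares. The monthly data and effect sizes are simulated assumptions.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n, t0 = 48, 24                                 # monthly series, intervention at t0
    t = np.arange(n)
    after = (t >= t0).astype(float)
    y = 50 + 0.2 * t - 6 * after - 0.3 * after * (t - t0) + rng.normal(0, 2, n)

    X = sm.add_constant(np.column_stack([t, after, after * (t - t0)]))
    res = sm.OLS(y, X).fit()
    print(res.params)   # baseline slope, level change (~ -6), slope change (~ -0.3)
    ```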

  5. Unraveling multiple changes in complex climate time series using Bayesian inference

    NASA Astrophysics Data System (ADS)

    Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias

    2016-04-01

    Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of underlying changes is therefore of particular importance in environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. In order to develop a diagnostic approach designed to capture a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. Thus, based on a generic transition model, a probability expression is derived that is capable of indicating multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to environmental time series (about 100 years) from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from the ODP sites 659, 721/722 and 967, interpreted as climate indicators of the African region during the Plio-Pleistocene (about 5 Ma). The detailed inference unravels multiple transitions underlying the indirect climate observations, coinciding with established global climate events.
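
    A minimal sketch of Bayesian change-point inference in the same spirit, heavily simplified: a single change in the mean of a Gaussian series with unit variance and a flat prior over locations, using the profile likelihood as a stand-in for the marginal likelihood. The kernel-based extension to multiple generic transitions described above is far more general.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    x = np.concatenate([rng.normal(0.0, 1, 120), rng.normal(1.2, 1, 80)])
    n = x.size

    log_post = np.full(n, -np.inf)
    for k in range(5, n - 5):                 # candidate change locations
        a, b = x[:k], x[k:]
        # profile log-likelihood: segment means set to their MLEs, unit variance
        log_post[k] = -0.5 * (((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum())

    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    k_map = int(post.argmax())                # posterior mode of the change point
    ```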

  6. Statistical Analysis of CFD Solutions from the Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    2002-01-01

    A simple, graphical framework is presented for robust statistical evaluation of results obtained from N-Version testing of a series of RANS CFD codes. The solutions were obtained by a variety of code developers and users for the June 2001 Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration used for the computational tests is the DLR-F4 wing-body combination previously tested in several European wind tunnels and for which a previous N-Version test had been conducted. The statistical framework is used to evaluate code results for (1) a single cruise design point, (2) drag polars and (3) drag rise. The paper concludes with a discussion of the meaning of the results, especially with respect to predictability, Validation, and reporting of solutions.

  7. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
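
    A minimal sketch of the piecewise Markov idea as described: sample variable-length blocks of the historical series and join them where the discretized weather state at the joint matches, preserving weather-window durations better than state-by-state chaining. The toy significant-wave-height series, the state discretization, the matching rule, and the block-length range are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    hs = np.clip(2 + 0.05 * np.cumsum(rng.normal(0, 0.3, 5000)), 0.5, 6.0)  # toy Hs
    states = np.digitize(hs, [1.5, 2.5, 3.5])      # discretize into 4 weather states

    def generate(n_out, series, states, rng, lmin=24, lmax=120):
        pieces = [series[:lmin]]
        end_state = states[lmin - 1]
        total = lmin
        while total < n_out:
            length = int(rng.integers(lmin, lmax))
            # candidate start indices whose state matches the current end state
            cand = np.flatnonzero(states[: len(series) - length] == end_state)
            start = int(rng.choice(cand))
            pieces.append(series[start:start + length])
            end_state = states[start + length - 1]
            total += length
        return np.concatenate(pieces)[:n_out]

    synthetic = generate(2000, hs, states, rng)
    ```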

  8. Change Detection in Rough Time Series

    DTIC Science & Technology

    2014-09-01

    Business Statistics: An Inferential Approach, Dellen: San Francisco. [18] Winston, W. (1997) Operations Research Applications and Algorithms, Duxbury...distribution that can present significant challenges to conventional statistical tracking techniques. To address this problem the proposed method...applies hybrid fuzzy statistical techniques to series granules instead of to individual measures. Three examples demonstrated the robust nature of the

  9. Xenogeneic Collagen Matrix Versus Connective Tissue Graft: Case Series of Various Gingival Recession Treatments.

    PubMed

    Chevalier, Grégoire; Cherkaoui, Selma; Kruk, Hanna; Bensaïd, Xavier; Danan, Marc

    A xenogeneic collagen matrix recently has been suggested as an alternative to connective tissue graft for the treatment of gingival recession. The matrix avoids the second surgical site, and as a consequence could decrease surgical morbidity. This new matrix was used in various clinical situations and compared to connective tissue graft (CTG) in a split-mouth design case series. A total of 17 recessions were treated with a coronally advanced flap, 9 with CTG, and 8 with the matrix. Mean recession reduction was 2.00 mm with the CTG and 2.00 mm with the matrix. No significant statistical differences between the techniques were observed in this case report.

  10. Xenogeneic Collagen Matrix Versus Connective Tissue Graft: Case Series of Various Gingival Recession Treatments.

    PubMed

    Chevalier, Grégoire; Cherkaoui, Selma; Kruk, Hanna; Bensaïd, Xavier; Danan, Marc

    2016-08-24

    A xenogeneic collagen matrix recently has been suggested as an alternative to connective tissue graft for the treatment of gingival recession. The matrix avoids the second surgical site, and as a consequence could decrease surgical morbidity. This new matrix was used in various clinical situations and compared to connective tissue graft (CTG) in a split-mouth design case series. A total of 17 recessions were treated with a coronally advanced flap, 9 with CTG, and 8 with the matrix. Mean recession reduction was 2.00 mm with the CTG and 2.00 mm with the matrix. No significant statistical differences between the techniques were observed in this case report.

  11. A Computer Evolution in Teaching Undergraduate Time Series

    ERIC Educational Resources Information Center

    Hodgess, Erin M.

    2004-01-01

    In teaching undergraduate time series courses, we have used a mixture of various statistical packages. We have finally been able to teach all of the applied concepts within one statistical package; R. This article describes the process that we use to conduct a thorough analysis of a time series. An example with a data set is provided. We compare…

  12. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Cordoba (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Cordoba).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Cordoba, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  13. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Narino (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Narino).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Narino, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  14. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Cauca (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Cauca).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Cauca, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  15. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Caldas (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Caldas).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Caldas, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  16. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Boyaca (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Boyaca).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Boyaca, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  17. Personal Docente del Nivel Primario. Series Estadisticas Basicas, Nivel Educativo: Huila (Teaching Personnel in Primary Schools. Basic Statistics Series, Level of Education: Huila).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in the elementary schools of Huila, Colombia, between 1958 and 1967. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of the teachers. For overall statistics in…

  18. Improving surveillance for injuries associated with potential motor vehicle safety defects

    PubMed Central

    Whitfield, R; Whitfield, A

    2004-01-01

    Objective: To improve surveillance for deaths and injuries associated with potential motor vehicle safety defects. Design: Vehicles in fatal crashes can be studied for indications of potential defects using an "early warning" surveillance statistic previously suggested for screening reports of adverse drug reactions. This statistic is illustrated with time series data for fatal, tire-related, and fire-related crashes. Geographic analyses are used to augment the tire-related statistics. Results: A statistical criterion based on the Poisson distribution that tests the likelihood of an expected number of events, given the number of events that actually occurred, is a promising method that can be readily adapted for use in injury surveillance. Conclusions: Use of the demonstrated techniques could have helped to avert a well-known injury surveillance failure. This method is adaptable to aid in the direction of engineering and statistical reviews to prevent deaths and injuries associated with potential motor vehicle safety defects using available databases. PMID:15066972
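
    A minimal sketch of the screening statistic described: given the expected number of events for a vehicle series (derived from exposure and overall rates) and the observed count, flag the series when the Poisson tail probability P(X >= observed) is small. The counts and threshold below are illustrative.

    ```python
    from scipy.stats import poisson

    expected, observed = 4.2, 11
    p_tail = poisson.sf(observed - 1, expected)   # P(X >= observed | mean = expected)
    flag = p_tail < 0.01                          # "early warning" if very unlikely
    print(p_tail, flag)
    ```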

  19. Health Statistics NSW: getting the right balance between privacy and small numbers in a web-based reporting system.

    PubMed

    Scandol, James P; Moore, Helen A

    2012-01-01

    Health Statistics NSW is a new web-based application developed by the Centre for Epidemiology and Research at the NSW Ministry of Health. The application is designed to be an efficient vehicle for the timely delivery of health statistics to a diverse audience including the general public, health planners, researchers, students and policy analysts. The development and implementation of this web application required the consideration of a series of competing demands such as: the public interest in providing health data while maintaining the privacy interests of the individuals whose health is being reported; reporting data at spatial scales of relevance to health planners while maintaining the statistical integrity of any inferences drawn; the use of hardware and software systems which are publicly accessible, scalable and robust, while ensuring high levels of security. These three competing demands and the relationships between them are discussed in the context of Health Statistics NSW.

  20. The impact of early discharge laws on the health of newborns.

    PubMed

    Evans, William N; Garthwaite, Craig; Wei, Heng

    2008-07-01

    Using an interrupted time series design and a census of births in California over a 6-year period, we show that state and federal laws passed in the late 1990s designed to increase the length of postpartum hospital stays reduced considerably the fraction of newborns that were discharged early. The law had little impact on re-admission rates for privately insured, vaginally delivered newborns, but reduced re-admission rates for privately insured c-section-delivered and Medicaid-insured vaginally delivered newborns by statistically significant amounts. Our calculations suggest the program was not cost saving.

  1. Designing Studies That Would Address the Multilayered Nature of Health Care

    PubMed Central

    Pennell, Michael; Rhoda, Dale; Hade, Erinn M.; Paskett, Electra D.

    2010-01-01

    We review design and analytic methods available for multilevel interventions in cancer research with particular attention to study design, sample size requirements, and potential to provide statistical evidence for causal inference. The most appropriate methods will depend on the stage of development of the research and whether randomization is possible. Early on, fractional factorial designs may be used to screen intervention components, particularly when randomization of individuals is possible. Quasi-experimental designs, including time-series and multiple baseline designs, can be useful once the intervention is designed because they require few sites and can provide the preliminary evidence to plan efficacy studies. In efficacy and effectiveness studies, group-randomized trials are preferred when randomization is possible and regression discontinuity designs are preferred otherwise if assignment based on a quantitative score is possible. Quasi-experimental designs may be used, especially when combined with recent developments in analytic methods to reduce bias in effect estimates. PMID:20386057

  2. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of the geometric random walk series are in good agreement with those obtained from empirical stock data.
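
    A minimal sketch reproducing the kind of simulation reported: generate geometric random walks, extract upper-record times, and collect record ages (gaps between successive records), to which the paper fits a power law with exponent in the range 1.5 to 1.8. The drift and volatility parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    ages = []
    for _ in range(200):
        x = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.02, 5000)))  # geometric walk
        is_record = x == np.maximum.accumulate(x)  # True at each new running maximum
        record_times = np.flatnonzero(is_record)
        ages.extend(np.diff(record_times))

    ages = np.array(ages)  # histogram of these ages approximates the age distribution
    ```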

  3. Personal Docente del Nivel Primario. Series Estadisticas Basicas: Colombia (Teaching Personnel in Primary Schools. Basic Statistics Series: Colombia).

    ERIC Educational Resources Information Center

    Ministerio de Educacion Nacional, Bogota (Colombia). Instituto Colombiano de Pedagogia.

    This document provides statistical data on the distribution and education of teaching personnel working in Colombian elementary schools between 1940 and 1968. The statistics cover the number of men and women, public and private schools, urban and rural location, and the amount of education of teachers. (VM)

  4. Explorations in Statistics: Standard Deviations and Standard Errors

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…

  5. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages in terms of reduced experimental effort and of increased quality of information that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].

  6. A Review of Some Aspects of Robust Inference for Time Series.

    DTIC Science & Technology

    1984-09-01

    A Review of Some Aspects of Robust Inference for Time Series, R. D. Martin, Technical Report, September 1984, Department of Statistics, University of Washington, Seattle...clear. One cannot hope to have a good method for dealing with outliers in time series by using only an instantaneous nonlinear transformation of the data...

  7. Influence of climate variability versus change at multi-decadal time scales on hydrological extremes

    NASA Astrophysics Data System (ADS)

    Willems, Patrick

    2014-05-01

    Recent studies have shown that rainfall and hydrological extremes do not occur randomly in time, but are subject to multidecadal oscillations. In addition to these oscillations, there are temporal trends due to climate change. Design statistics, such as intensity-duration-frequency (IDF) relationships for extreme rainfall or flow-duration-frequency (QDF) relationships, are affected by both types of temporal change (short term and long term). This presentation discusses these changes, how they influence water engineering design and decision making, and how this influence can be assessed and taken into account in practice. The multidecadal oscillations in rainfall and hydrological extremes were studied based on a technique for the identification and analysis of changes in extreme quantiles. The statistical significance of the oscillations was evaluated by means of a non-parametric bootstrapping method. Oscillations in large-scale atmospheric circulation were identified as the main drivers of the temporal oscillations in rainfall and hydrological extremes. They also explain why spatial phase shifts (e.g. north-south variations in Europe) exist between the oscillation highs and lows. Next to the multidecadal climate oscillations, several stations show trends during the most recent decades, which may be attributed to climate change as a result of anthropogenic global warming. Such attribution to anthropogenic global warming is, however, uncertain. It can be attempted based on simulation results with climate models, but it is shown that the climate model results are too uncertain to enable a clear attribution. Water engineering design statistics, such as extreme rainfall IDF or peak or low flow QDF statistics, obviously are influenced by these temporal variations (oscillations, trends). It is shown in the paper, based on the Brussels 10-minute rainfall data, that rainfall design values may be biased by about 20% when based on short rainfall series of 10 to 15 years' length, and still by 8% for series of 25 years' length. Methods for bias correction are demonstrated. The definition of "bias" depends on a number of factors, which needs further debate in the hydrological and water engineering community. References: Willems P. (2013), 'Multidecadal oscillatory behaviour of rainfall extremes in Europe', Climatic Change, 120(4), 931-944; Willems, P. (2013), 'Adjustment of extreme rainfall statistics accounting for multidecadal climate oscillations', Journal of Hydrology, 490, 126-133; Willems, P., Olsson, J., Arnbjerg-Nielsen, K., Beecham, S., Pathirana, A., Bülow Gregersen, I., Madsen, H., Nguyen, V-T-V. (2012), 'Impacts of climate change on rainfall extremes and urban drainage', IWA Publishing, 252p., Paperback Print ISBN 9781780401256; Ebook ISBN 9781780401263.
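
    The record gives no code; as a loose illustration of the kind of non-parametric resampling it mentions, the sketch below (our own, with window length, quantile and band level as arbitrary choices) tracks an extreme quantile in sliding windows and builds a permutation band showing how much that quantile would fluctuate in the absence of any temporal structure.

```python
import numpy as np

def windowed_quantile_band(series, window=120, q=0.95, n_perm=500, seed=0):
    """Sliding-window quantile plus a permutation band under 'no structure'."""
    rng = np.random.default_rng(seed)
    n_win = len(series) - window + 1
    obs = np.array([np.quantile(series[i:i + window], q) for i in range(n_win)])
    perm = np.empty((n_perm, n_win))
    for b in range(n_perm):
        shuffled = rng.permutation(series)  # destroys oscillations and trends
        perm[b] = [np.quantile(shuffled[i:i + window], q) for i in range(n_win)]
    lo, hi = np.percentile(perm, [2.5, 97.5], axis=0)
    return obs, lo, hi  # observed quantiles outside [lo, hi] suggest real variation
```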

  8. Hybrid intelligent methodology to design translation invariant morphological operators for Brazilian stock market prediction.

    PubMed

    Araújo, Ricardo de A

    2010-12-01

    This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.
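
    The phase-space reconstruction step mentioned above (choosing time lags to embed a scalar series) is a standard technique. A minimal sketch follows; this is not the authors' TIMRAA code, and the lag set is a free choice.

```python
import numpy as np

def delay_embed(series, lags):
    """Stack lagged copies of a scalar series into phase-space vectors."""
    series = np.asarray(series)
    max_lag = max(lags)
    return np.column_stack([series[max_lag - lag: len(series) - lag]
                            for lag in lags])

# Each row is (x_t, x_{t-1}, x_{t-5}) for some time index t.
vectors = delay_embed(np.sin(np.linspace(0, 20, 200)), lags=[0, 1, 5])
```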

  9. Methods used to compute low-flow frequency characteristics for continuous-record streamflow stations in Minnesota, 2006

    USGS Publications Warehouse

    Winterstein, Thomas A.; Arntson, Allan D.; Mitton, Gregory B.

    2007-01-01

    The 1-, 7-, and 30-day low-flow series were determined for 120 continuous-record streamflow stations in Minnesota having at least 20 years of continuous record. The 2-, 5-, 10-, 50-, and 100-year statistics were determined for each series by fitting a log Pearson type III distribution to the data. The methods used to determine the low-flow statistics and to construct the plots of the low-flow frequency curves are described. The low-flow series and the low-flow statistics are presented in tables and graphs.
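
    For illustration only, here is a minimal sketch of the distribution-fitting step using SciPy's Pearson type III on log-transformed flows. The flow values are invented, and the USGS procedure (method of moments with skew adjustments) differs from this maximum-likelihood shortcut.

```python
import numpy as np
from scipy import stats

# Hypothetical annual 7-day low flows, one value per year (cubic feet per second).
q7 = np.array([12.0, 8.5, 15.2, 6.9, 10.3, 9.8, 7.4, 13.1, 11.6, 8.0,
               14.7, 9.1, 6.2, 10.9, 12.8, 7.7, 9.5, 11.2, 8.8, 13.9])

skew, loc, scale = stats.pearson3.fit(np.log10(q7))   # log-Pearson type III
q7_10 = 10 ** stats.pearson3.ppf(0.1, skew, loc=loc, scale=scale)
print(f"7Q10 estimate: {q7_10:.1f} ft3/s")  # 10% annual non-exceedance, ~10-year low flow
```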

  10. Investigation on Quantitative Structure Activity Relationships of a Series of Inducible Nitric Oxide.

    PubMed

    Sharma, Mukesh C; Sharma, S

    2016-12-01

    A series of 2-dihydro-4-quinazolin derivatives acting as potent, highly selective inhibitors of inducible nitric oxide synthase was subjected to quantitative structure-activity relationship (QSAR) analysis. Statistically significant equations with a high correlation coefficient (r² = 0.8219) were developed. The k-nearest neighbor model showed good cross-validated and external-validation correlation coefficients of 0.7866 and 0.7133, respectively. The selected electrostatic field descriptors (blue contours around R1 and R4 of the quinazolinamine moiety) indicate that electronegative groups at these positions are favorable for inhibition of nitric oxide synthase activity. The QSAR models may reveal the structural requirements of inducible nitric oxide synthase inhibitors and help in the design of new compounds.
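
    As a hedged sketch of the kind of validation reported (not the authors' software), a k-nearest-neighbor regression with leave-one-out cross-validation on invented descriptor data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))                   # hypothetical molecular descriptors
y = X @ [0.8, -0.5, 0.3, 0.0, 0.1] + 0.1 * rng.normal(size=30)  # hypothetical activity

y_cv = cross_val_predict(KNeighborsRegressor(n_neighbors=3), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)  # cross-validated q2
print(f"cross-validated q2 = {q2:.3f}")
```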

  11. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    PubMed

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-01

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  12. National Wetland Condition Assessment 2011: A ...

    EPA Pesticide Factsheets

    The National Wetland Condition Assessment 2011: A Collaborative Survey presents the results of an unprecedented assessment of the nation’s wetlands. This report is part of the National Aquatic Resource Surveys, a series of statistically based surveys designed to provide the public and decision makers with nationally consistent and representative information on the condition of all the nation's waters. The National Wetland Condition report provides information on the biological condition of the nation’s wetlands and key stressors that affect them.

  13. Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.

    PubMed

    Bühler, Jonas; von Lieres, Eric; Huber, Gregor J

    2018-01-01

    Studies of long-distance transport of tracer isotopes in plants offer high potential for functional phenotyping, but so far measurement time is a bottleneck, because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition for each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the required statistical reliability of data acquired in future experiments.

  14. Manipulating Environmental Time Series in Python/Numpy: the Scikits.Timeseries Package and its Applications.

    NASA Astrophysics Data System (ADS)

    Gerard-Marchant, P. G.

    2008-12-01

    Numpy is a free, open-source C/Python interface designed for the fast and convenient manipulation of multidimensional numerical arrays. The base object, ndarray, can also easily be extended to define new objects meeting specific needs. Thanks to its simplicity, efficiency and modularity, numpy and its companion library Scipy have become increasingly popular in the scientific community over the last few years, with applications ranging from astronomy and engineering to finance and statistics. Its capacity to handle missing values is particularly appealing when analyzing environmental time series, where irregular data sampling might be an issue. After reviewing the main characteristics of numpy objects and the mechanism of subclassing, we will present the scikits.timeseries package, developed to manipulate single- and multi-variable arrays indexed in time. We will illustrate some typical applications of this package by introducing climpy, a set of extensions designed to help analyze the impacts of climate variability on environmental data such as precipitation or streamflows.
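
    The missing-value handling the abstract refers to rests on NumPy's masked arrays; a minimal, package-independent illustration with invented gappy data:

```python
import numpy as np
import numpy.ma as ma

raw = np.array([2.1, 3.4, np.nan, 5.0, np.nan, 4.2, 3.8])  # gappy observations
series = ma.masked_invalid(raw)      # mask the NaNs instead of dropping them

print(series.mean(), series.std())   # statistics silently skip masked entries
print(series.filled(series.mean()))  # or fill the gaps explicitly
```

    The package described above builds on this mechanism, adding a time index on top of the masked-array machinery.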

  15. [What is the methodological quality of articles on therapeutic procedures published in Cirugía Española?].

    PubMed

    Manterola, Carlos; Busquets, Juli; Pascual, Marta; Grande, Luis

    2006-02-01

    The aim of this study was to determine the methodological quality of articles on therapeutic procedures published in Cirugía Española and to study its association with publication year, center, and subject matter. A bibliometric study including all articles on therapeutic procedures published in Cirugía Española between 2001 and 2004 was performed. All kinds of clinical designs were considered, excluding editorials, review articles, letters to the editor, and experimental studies. The variables analyzed were: year of publication, center, design, and methodological quality. Methodological quality was determined by a valid and reliable scale. Descriptive statistics (calculation of means, standard deviations and medians) and analytical statistics (Pearson's chi-squared, nonparametric, ANOVA and Bonferroni tests) were used. A total of 244 articles were studied (197 case series [81%], 28 cohort studies [12%], 17 clinical trials [7%], 1 cross-sectional study and 1 case-control study [0.8%]). The studies were performed mainly in Catalonia and Murcia (22% and 16%, respectively). The most frequent subject areas were soft tissue and hepatobiliopancreatic surgery (23% and 19%, respectively). The mean and median of the methodological quality score calculated for the entire series were 10.2 +/- 3.9 points and 9.5 points, respectively. Methodological quality increased significantly with publication year (p < 0.001). An association between methodological quality and subject area was observed, but no association was detected with the center performing the study. The methodological quality of articles on therapeutic procedures published in Cirugía Española between 2001 and 2004 is low. However, a statistically significant trend toward improvement was observed.

  16. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) It explicitly adjusts the statistical significance to any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain and is provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
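
    The programs described are Fortran 77, but the core idea is easy to demonstrate with SciPy's Lomb-Scargle implementation on synthetic, unevenly sampled data; this sketch omits the smoothing and permutation-test refinements the paper adds.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, size=200))         # uneven sampling times
y = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=200)

freqs = 2 * np.pi * np.linspace(0.01, 0.5, 500)    # angular frequencies to scan
power = lombscargle(t, y - y.mean(), freqs, normalize=True)

print(freqs[np.argmax(power)] / (2 * np.pi))       # ~0.1, the injected frequency
```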

  17. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    USGS Publications Warehouse

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
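
    As a rough illustration of what a 7Q10-style computation involves (not SWToolbox itself), the annual series of minimum 7-day average flows can be built with pandas on invented daily data; fitting a frequency distribution to that series, as in the low-flow record earlier in this list, then yields the 7Q10.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("1990-01-01", "2019-12-31", freq="D")
flow = pd.Series(np.random.default_rng(7).gamma(2.0, 5.0, len(idx)),
                 index=idx)                       # hypothetical daily discharge

seven_day = flow.rolling(7).mean()                # 7-day moving-average flow
annual_min = seven_day.groupby(seven_day.index.year).min()
print(annual_min.head())                          # one 7-day low flow per year
```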

  18. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
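
    A minimal sketch of the windowing step described above (our own illustration; the window length is a free parameter): split the signal into windows, estimate each window's variance, and use the empirical distribution of those variances as the compounding measure.

```python
import numpy as np

def local_variances(signal, window=100):
    """Sample variances of non-overlapping windows of a 1-D signal."""
    signal = np.asarray(signal)
    n = len(signal) // window
    return signal[:n * window].reshape(n, window).var(axis=1, ddof=1)

# The long-horizon density is then modeled as a Gaussian compounded
# (mixed) over this empirical variance distribution.
```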

  19. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  20. Time-series analysis of the transcriptome and proteome of Escherichia coli upon glucose repression.

    PubMed

    Borirak, Orawan; Rolfe, Matthew D; de Koning, Leo J; Hoefsloot, Huub C J; Bekker, Martijn; Dekker, Henk L; Roseboom, Winfried; Green, Jeffrey; de Koster, Chris G; Hellingwerf, Klaas J

    2015-10-01

    Time-series transcript- and protein-profiles were measured upon initiation of carbon catabolite repression in Escherichia coli, in order to investigate the extent of post-transcriptional control in this prototypical response. A glucose-limited chemostat culture was used as the CCR-free reference condition. Stopping the pump and simultaneously adding a pulse of glucose, that saturated the cells for at least 1h, was used to initiate the glucose response. Samples were collected and subjected to quantitative time-series analysis of both the transcriptome (using microarray analysis) and the proteome (through a combination of 15N-metabolic labeling and mass spectrometry). Changes in the transcriptome and corresponding proteome were analyzed using statistical procedures designed specifically for time-series data. By comparison of the two sets of data, a total of 96 genes were identified that are post-transcriptionally regulated. This gene list provides candidates for future in-depth investigation of the molecular mechanisms involved in post-transcriptional regulation during carbon catabolite repression in E. coli, like the involvement of small RNAs. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  1. To what extent does variability of historical rainfall series influence extreme event statistics of sewer system surcharge and overflows?

    PubMed

    Schaarup-Jensen, K; Rasmussen, M R; Thorndahl, S

    2009-01-01

    In urban drainage modelling long-term extreme statistics has become an important basis for decision-making e.g. in connection with renovation projects. Therefore it is of great importance to minimize the uncertainties with regards to long-term prediction of maximum water levels and combined sewer overflow (CSO) in drainage systems. These uncertainties originate from large uncertainties regarding rainfall inputs, parameters, and assessment of return periods. This paper investigates how the choice of rainfall time series influences the extreme events statistics of max water levels in manholes and CSO volumes. Traditionally, long-term rainfall series, from a local rain gauge, are unavailable. In the present case study, however, long and local rain series are available. 2 rainfall gauges have recorded events for approximately 9 years at 2 locations within the catchment. Beside these 2 gauges another 7 gauges are located at a distance of max 20 kilometers from the catchment. All gauges are included in the Danish national rain gauge system which was launched in 1976. The paper describes to what extent the extreme events statistics based on these 9 series diverge from each other and how this diversity can be handled, e.g. by introducing an "averaging procedure" based on the variability within the set of statistics. All simulations are performed by means of the MOUSE LTS model.

  2. Proceedings of the Conference on the Design of Experiments in Army Research Development and Testing (32nd)

    DTIC Science & Technology

    1987-06-01

    Excerpts: ... the number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique ... The integrated autoregressive moving average model, denoted ARIMA(p,d,q), results from combining a d-th-order differencing process with an ARMA(p,q) model ... Session topics included Design of Experiments, Data Analysis and Modeling, Theory and Probabilistic Inference, Fuzzy Statistics, Forecasting and Prediction, and Small Sample…
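
    For readers unfamiliar with the notation, a hedged modern sketch of fitting an ARIMA(p,d,q) model, using statsmodels rather than the 1987-era software discussed in the proceedings, on a simulated series:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
y = np.cumsum(0.1 + rng.normal(size=300))   # simulated series with a unit root

result = ARIMA(y, order=(1, 1, 1)).fit()    # p=1 AR, d=1 differencing, q=1 MA
print(result.params)
print(result.forecast(steps=5))             # five-step-ahead forecast
```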

  3. Estimating weak ratiometric signals in imaging data. II. Meta-analysis with multiple, dual-channel datasets.

    PubMed

    Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D

    2008-09-01

    Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.

  4. Statistical inference for classification of RRIM clone series using near IR reflectance properties

    NASA Astrophysics Data System (ADS)

    Ismail, Faridatul Aima; Madzhi, Nina Korlina; Hashim, Hadzli; Abdullah, Noor Ezan; Khairuzzaman, Noor Aishah; Azmi, Azrie Faris Mohd; Sampian, Ahmad Faiz Mohd; Harun, Muhammad Hafiz

    2015-08-01

    RRIM clones are rubber breeding series produced by the RRIM (Rubber Research Institute of Malaysia) through its rubber breeding program to improve latex yield and produce clones attractive to farmers. The objective of this work is to analyse measurements from an optical sensing device on latex from selected clone series. The device transmits NIR light, and the reflectance is converted to a voltage. The reflectance index values obtained via voltage were analyzed using statistical techniques in order to find out whether the clones can be discriminated. From the statistical results, using error plots and a one-way ANOVA test, there is overwhelming evidence of discrimination among the RRIM 2002, RRIM 2007 and RRIM 3001 clone series (p value = 0.000). RRIM 2008 cannot be discriminated from RRIM 2014; however, both of these groups are distinct from the other clones.
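
    The one-way ANOVA reported above is straightforward to reproduce in outline; the voltages below are invented for illustration.

```python
from scipy import stats

# Hypothetical reflectance-index voltages for three clone series.
rrim_2002 = [2.10, 2.15, 2.08, 2.12, 2.11]
rrim_2007 = [2.45, 2.50, 2.42, 2.48, 2.47]
rrim_3001 = [2.80, 2.76, 2.83, 2.79, 2.81]

f_stat, p_value = stats.f_oneway(rrim_2002, rrim_2007, rrim_3001)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p => the clone means differ
```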

  5. MULTIVARIATE STATISTICAL MODELS FOR EFFECTS OF PM AND COPOLLUTANTS IN A DAILY TIME SERIES EPIDEMIOLOGY STUDY

    EPA Science Inventory

    Most analyses of daily time series epidemiology data relate mortality or morbidity counts to PM and other air pollutants by means of single-outcome regression models using multiple predictors, without taking into account the complex statistical structure of the predictor variable...

  6. Using missing ordinal patterns to detect nonlinearity in time series data.

    PubMed

    Kulp, Christopher W; Zunino, Luciano; Osborne, Thomas; Zawadzki, Brianna

    2017-08-01

    The number of missing ordinal patterns (NMP) is the number of ordinal patterns that do not appear in a series after it has been symbolized using the Bandt and Pompe methodology. In this paper, the NMP is demonstrated as a test for nonlinearity using a surrogate framework in order to see if the NMP for a series is statistically different from the NMP of iterative amplitude adjusted Fourier transform (IAAFT) surrogates. It is found that the NMP works well as a test statistic for nonlinearity, even in the cases of very short time series. Both model and experimental time series are used to demonstrate the efficacy of the NMP as a test for nonlinearity.
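
    The NMP statistic itself is compact enough to sketch; this is our own minimal version of the Bandt-Pompe counting step, without the IAAFT surrogate machinery.

```python
import math
import numpy as np

def missing_ordinal_patterns(series, dim=4):
    """Number of length-`dim` ordinal patterns that never occur in the series."""
    series = np.asarray(series)
    seen = {tuple(np.argsort(series[i:i + dim]))
            for i in range(len(series) - dim + 1)}
    return math.factorial(dim) - len(seen)

# The test compares this count against its distribution over IAAFT surrogates.
```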

  7. On entropy, financial markets and minority games

    NASA Astrophysics Data System (ADS)

    Zapart, Christopher A.

    2009-04-01

    The paper builds upon an earlier statistical analysis of financial time series with Shannon information entropy, published in [L. Molgedey, W. Ebeling, Local order, entropy and predictability of financial time series, European Physical Journal B-Condensed Matter and Complex Systems 15/4 (2000) 733-737]. A novel generic procedure is proposed for making multistep-ahead predictions of time series by building a statistical model of entropy. The approach is first demonstrated on the chaotic Mackey-Glass time series and later applied to Japanese Yen/US dollar intraday currency data. The paper also reinterprets Minority Games [E. Moro, The minority game: An introductory guide, Advances in Condensed Matter and Statistical Physics (2004)] within the context of physical entropy, and uses models derived from minority game theory as a tool for measuring the entropy of a model in response to time series. This entropy conditional upon a model is subsequently used in place of information-theoretic entropy in the proposed multistep prediction algorithm.

  8. Student learning of upper-level thermal and statistical physics: The derivation and use of the Boltzmann factor

    NASA Astrophysics Data System (ADS)

    Thompson, John

    2015-04-01

    As the Physical Review Focused Collection demonstrates, recent frontiers in physics education research include systematic investigations at the upper division. As part of a collaborative project, we have examined student understanding of several topics in upper-division thermal and statistical physics. A fruitful context for research is the Boltzmann factor in statistical mechanics: the standard derivation involves several physically justified mathematical steps as well as the invocation of a Taylor series expansion. We have investigated student understanding of the physical significance of the Boltzmann factor as well as its utility in various circumstances, and identified various lines of student reasoning related to the use of the Boltzmann factor. Results from written data as well as teaching interviews suggest that many students do not use the Boltzmann factor when answering questions related to probability in applicable physical situations, even after lecture instruction. We designed an inquiry-based tutorial activity to guide students through a derivation of the Boltzmann factor and to encourage deep connections between the physical quantities involved and the mathematics. Observations of students working through the tutorial suggest that many students at this level can recognize and interpret Taylor series expansions, but they often lack fluency in creating and using Taylor series appropriately, despite previous exposure in both calculus and physics courses. Our findings also suggest that tutorial participation not only increases the prevalence of relevant invocation of the Boltzmann factor, but also helps students gain an appreciation of the physical implications and meaning of the mathematical formalism behind the formula. Supported in part by NSF Grants DUE-0817282, DUE-0837214, and DUE-1323426.
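
    For reference, the Taylor-expansion step at the heart of the derivation, in standard textbook form (not the tutorial's wording): for a system exchanging energy with a large reservoir,

```latex
% Boltzmann factor from a first-order Taylor expansion of the reservoir entropy
P(E_i) \propto \Omega_R(E_{\text{tot}} - E_i), \qquad
\ln \Omega_R(E_{\text{tot}} - E_i) \approx
  \ln \Omega_R(E_{\text{tot}})
  - E_i \left.\frac{\partial \ln \Omega_R}{\partial E}\right|_{E_{\text{tot}}}
  = \ln \Omega_R(E_{\text{tot}}) - \frac{E_i}{k_B T}
```

    Exponentiating gives P(E_i) proportional to exp(-E_i/k_B T), the Boltzmann factor; truncating the Taylor series after the first-order term is justified because the reservoir is much larger than the system.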

  9. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    In this study, we introduced an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference from conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km² to 238 km² in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low-flow and peak-flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are the most ubiquitous in flow time series, IAT sampling gives relatively more weight to high-flow values, since a given flow amount is accumulated in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low-flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
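
    A minimal sketch of the sampling idea (our own, assuming a regularly sampled, strictly positive discharge series and linear interpolation of the cumulative volume):

```python
import numpy as np

def inter_amount_times(flows, amount):
    """Time taken to accumulate each successive fixed flow `amount`."""
    cum = np.cumsum(flows)                 # cumulative volume (monotone if flows > 0)
    targets = np.arange(amount, cum[-1], amount)
    # Fractional time index at which each cumulative target is reached.
    crossings = np.interp(targets, cum, np.arange(len(flows)))
    return np.diff(crossings, prepend=0.0)

# High flows produce many short inter-amount times; dry spells produce few long ones.
```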

  10. Methodological considerations, such as directed acyclic graphs, for studying "acute on chronic" disease epidemiology: chronic obstructive pulmonary disease example.

    PubMed

    Tsai, Chu-Lin; Camargo, Carlos A

    2009-09-01

    Acute exacerbations of chronic disease are ubiquitous in clinical medicine, and thus far there has been a paucity of integrated methodological discussion of this phenomenon. We use acute exacerbations of chronic obstructive pulmonary disease as an example to emphasize key epidemiological and statistical issues for this understudied field in clinical epidemiology. Directed acyclic graphs are a useful epidemiological tool to explain the differential effects of risk factors on health outcomes in studies of acute and chronic phases of disease. To study the pathogenesis of acute exacerbations of chronic disease, the case-crossover design and time-series analysis are well-suited study designs for differentiating acute from chronic effects. Modeling changes over time and setting appropriate thresholds are important steps to separate acute from chronic phases of disease in serial measurements. In statistical analysis, acute exacerbations are recurrent events, and some individuals are more prone to recurrences than others. Therefore, appropriate statistical modeling should take into account intraindividual dependence. Finally, we recommend the use of an "event-based" number needed to treat (NNT) to prevent a single exacerbation instead of the traditional patient-based NNT. Addressing these methodological challenges will advance research quality in acute-on-chronic disease epidemiology.

  11. qFeature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-14

    This package contains statistical routines for extracting features from multivariate time-series data, which can then be used in subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression fits to moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
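
    The package itself is an R library; as a language-neutral illustration of the moving-window fitting it describes, a Python sketch follows (window size, step size and polynomial degree are arbitrary choices, not the package's defaults).

```python
import numpy as np

def window_features(series, window=50, step=25, degree=2):
    """Polynomial-fit coefficients over moving windows, as summary features."""
    series = np.asarray(series)
    t = np.arange(window)
    rows = [np.polyfit(t, series[start:start + window], degree)
            for start in range(0, len(series) - window + 1, step)]
    return np.array(rows)  # each row: [curvature, slope, level] for degree=2
```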

  12. An operational definition of a statistically meaningful trend.

    PubMed

    Bryhn, Andreas C; Dimberg, Peter H

    2011-04-28

    Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data is large, a trend may be statistically significant even if data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions concerning time and interval mean values. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
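
    The criterion is simple enough to state in code. Here is a minimal sketch of the test as the abstract describes it; the number of intervals is the analyst's choice.

```python
import numpy as np
from scipy import stats

def statistically_meaningful(values, n_intervals=5):
    """Trend test: regress interval means on time; require r^2 >= 0.65, p <= 0.05."""
    chunks = np.array_split(np.asarray(values, dtype=float), n_intervals)
    means = [chunk.mean() for chunk in chunks]
    fit = stats.linregress(range(n_intervals), means)
    return fit.rvalue ** 2 >= 0.65 and fit.pvalue <= 0.05
```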

  13. A pragmatic investigation into the effects of massage therapy on the self efficacy of multiple sclerosis clients.

    PubMed

    Finch, Paul; Bessonnette, Susan

    2014-01-01

    This research was conducted to examine changes in self-efficacy (the perception/belief that one can competently cope with a challenging situation) in multiple sclerosis clients following a series of massage therapy treatments. This small practical trial investigated the effects of a pragmatic treatment protocol using a prospective randomized pretest-posttest waitlist control design. Self-efficacy scores were obtained before the first treatment, mid-treatment series, after the last treatment in the series, four weeks after the final treatment, and again eight weeks after the final treatment had been received. The intervention involved a series of weekly one-hour therapeutic massage treatments conducted over eight weeks and a subsequent eight-week follow-up period. All treatments were delivered by supervised student therapists in the final term of their two-year massage therapy program. Self-efficacy [SE] was the outcome for the study, measured using the Multiple Sclerosis Self-Efficacy survey [MSSE]. Descriptive statistics for SE scores were assessed and inferential analysis involved the testing of between-group differences at each of the measurement points noted above. Statistically significant improvement in self-efficacy was noted between treatment (n = 8) and control (n = 7) groups at mid-treatment series (t = 2.32; p < 0.02), post-treatment series (t = 1.81; p < 0.05) and at four-week follow-up (t = 2.24; p < 0.02). At the eight-week follow-up, self-efficacy scores had decreased and there was no statistically significant difference between groups (t = 0.87; p < 0.2). Study results support previous findings indicating that massage therapy increases the self-efficacy of clients with multiple sclerosis, potentially resulting in a better overall adjustment to the disease and an improvement in psycho-emotional state. The increase in self-efficacy after 4 weeks of treatment suggests that a positive response occurs more rapidly than was previously demonstrated. The improvement in self-efficacy endured 4 weeks after the end of the treatment series, which suggests that massage therapy may have longer-term effects on self-efficacy that were not previously noted. Lack of inter-group difference at the eight-week follow-up reinforces the notion that on-going treatment is required in order to maintain the positive changes observed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Significance testing of clinical data using virus dynamics models with a Markov chain Monte Carlo method: application to emergence of lamivudine-resistant hepatitis B virus.

    PubMed Central

    Burroughs, N J; Pillay, D; Mutimer, D

    1999-01-01

    Bayesian analysis using a virus dynamics model is demonstrated to facilitate hypothesis testing of patterns in clinical time-series. Our Markov chain Monte Carlo implementation demonstrates that the viraemia time-series observed in two sets of hepatitis B patients on antiviral (lamivudine) therapy, chronic carriers and liver transplant patients, are significantly different, overcoming clinical trial design differences that question the validity of non-parametric tests. We show that lamivudine-resistant mutants grow faster in transplant patients than in chronic carriers, which probably explains the differences in emergence times and failure rates between these two sets of patients. Incorporation of dynamic models into Bayesian parameter analysis is of general applicability in medical statistics. PMID:10643081

  15. An adaptive state of charge estimation approach for lithium-ion series-connected battery system

    NASA Astrophysics Data System (ADS)

    Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael

    2018-07-01

    Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation for a lithium-ion series-connected battery system is usually inaccurate or even divergent with model-based methods such as the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator, which illustrates the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively obtain the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF with the UKF when model and measurement noise statistics are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.

  16. On statistical inference in time series analysis of the evolution of road safety.

    PubMed

    Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora

    2013-11-01

    Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include the annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to an under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating accident occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches, are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. The Use of Computer-Assisted Identification of ARIMA Time-Series.

    ERIC Educational Resources Information Center

    Brown, Roger L.

    This study was conducted to determine the effects of using various levels of tutorial statistical software for the tentative identification of nonseasonal ARIMA models, a statistical technique proposed by Box and Jenkins for the interpretation of time-series data. The Box-Jenkins approach is an iterative process encompassing several stages of…

  18. Selection of Marine Corps Drill Instructors

    DTIC Science & Technology

    1980-03-01

    Excerpts from the table of contents: Key-Construction and Cross-Validation Statistics for Drill Instructor School Performance Success Keys; Race, and School Attrition; Key-Construction and Cross-Validation Statistics for Drill… From the text: a specially constructed form, the Alternation Ranking of Series Drill Instructors, ranks the DIs in a Series from highest to lowest in terms of their…

  19. Analytical methods development for supramolecular design in solar hydrogen production

    NASA Astrophysics Data System (ADS)

    Brown, J. R.; Elvington, M.; Mongelli, M. T.; Zigler, D. F.; Brewer, K. J.

    2006-08-01

    In the investigation of alternative energy sources, specifically, solar hydrogen production from water, the ability to perform experiments with a consistent and reproducible light source is key to meaningful photochemistry. The design, construction, and evaluation of a series of LED array photolysis systems for high throughput photochemistry have been performed. Three array systems of increasing sophistication are evaluated using calorimetric measurements and potassium tris(oxalato)ferrate(II) chemical actinometry and compared with a traditional 1000 W Xe arc lamp source. The results are analyzed using descriptive statistics and analysis of variance (ANOVA). The third generation array is modular, and controllable in design. Furthermore, the third generation array system is shown to be comparable in both precision and photonic output to a 1000 W Xe arc lamp.

  20. Detection of statistical asymmetries in non-stationary sign time series: Analysis of foreign exchange data

    PubMed Central

    Takayasu, Hideki; Takayasu, Misako

    2017-01-01

    We extend the concept of statistical symmetry as the invariance of a probability distribution under transformation to analyze binary sign time series of price differences from the foreign exchange market. We model segments of the sign time series as Markov sequences and apply a local hypothesis test to evaluate the symmetries of independence and time reversion in different periods of the market. For the test, we derive the probability that a binary Markov process generates a given set of symbol-pair counts. Using this analysis, we could not only segment the time series according to the different behaviors but also characterize the segments in terms of statistical symmetries. As a particular result, we find that the foreign exchange market is essentially time reversible, but this symmetry is broken when there is a strong external influence. PMID:28542208

  1. Circuit analysis method for thin-film solar cell modules

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The design of a thin-film solar cell module is dependent on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves, using cell test values or assumed defective-cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
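
    The dichotomous-population step can be illustrated with a binomial model (our own sketch; the defect probability and cell count are invented):

```python
from scipy import stats

n_cells = 100      # cells in a module (hypothetical)
p_defect = 0.03    # probability a given cell has a pinhole shunt (hypothetical)

defects = stats.binom(n_cells, p_defect)
print(defects.mean())        # expected number of defective cells per module
print(1 - defects.cdf(5))    # probability of more than 5 defective cells
```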

  2. Fractal analysis of the short time series in a visibility graph method

    NASA Astrophysics Data System (ADS)

    Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2016-05-01

    The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show the power-law form. Maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the performance of the estimation, aiming to avoid the influence of the drooping head and heavy tail of the degree distribution. As a result, we find that the MLE gives an optimal estimate of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE and the KS statistic is proposed to estimate Hurst exponents from short time series. Lastly, this paper offers an example verifying the effectiveness of the combined method. In addition, the corresponding results show that the VG can provide a reliable estimation of Hurst exponents.
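
    For concreteness, a minimal O(n²) construction of the natural visibility graph's degree sequence, from which a power-law index can then be estimated by MLE; this is our own sketch, not the paper's code.

```python
import numpy as np

def visibility_degrees(series):
    """Node degrees of the natural visibility graph of a time series."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    degree = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            t = np.arange(a + 1, b)
            line = y[a] + (y[b] - y[a]) * (t - a) / (b - a)
            if np.all(y[t] < line):   # every intermediate point lies below the chord
                degree[a] += 1
                degree[b] += 1
    return degree
```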

  3. Statistical tests for power-law cross-correlated processes

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.

  4. Signal detection of adverse events with imperfect confirmation rates in vaccine safety studies using self-controlled case series design.

    PubMed

    Xu, Stanley; Newcomer, Sophia; Nelson, Jennifer; Qian, Lei; McClure, David; Pan, Yi; Zeng, Chan; Glanz, Jason

    2014-05-01

    The Vaccine Safety Datalink project captures electronic health record data including vaccinations and medically attended adverse events on 8.8 million enrollees annually from participating managed care organizations in the United States. While the automated vaccination data are generally of high quality, a presumptive adverse event based on diagnosis codes in automated health care data may not be true (misclassification). Consequently, analyses using automated health care data can generate false positive results, where an association between the vaccine and outcome is incorrectly identified, as well as false negative findings, where a true association or signal is missed. We developed novel conditional Poisson regression models and fixed effects models that accommodate misclassification of adverse event outcome for self-controlled case series design. We conducted simulation studies to evaluate their performance in signal detection in vaccine safety hypotheses generating (screening) studies. We also reanalyzed four previously identified signals in a recent vaccine safety study using the newly proposed models. Our simulation studies demonstrated that (i) outcome misclassification resulted in both false positive and false negative signals in screening studies; (ii) the newly proposed models reduced both the rates of false positive and false negative signals. In reanalyses of four previously identified signals using the novel statistical models, the incidence rate ratio estimates and statistical significances were similar to those using conventional models and including only medical record review confirmed cases. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Statistical process control: A feasibility study of the application of time-series measurement in early neurorehabilitation after acquired brain injury.

    PubMed

    Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias

    2017-01-31

    Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, and thus the evaluation of rehabilitation is complex. The use of time-series measurements is susceptible to statistical change due to process variation. To evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process in change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
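
    For orientation, the control limits underlying such charts are computed from a baseline phase. A minimal individuals-chart sketch follows (our own; 3-sigma limits via the average moving range, with the standard d2 constant of 1.128 for subgroups of two).

```python
import numpy as np

def individuals_chart_limits(baseline):
    """Center line and 3-sigma limits for an individuals (XmR) control chart."""
    baseline = np.asarray(baseline, dtype=float)
    center = baseline.mean()
    sigma = np.abs(np.diff(baseline)).mean() / 1.128  # moving-range estimate of sigma
    return center - 3 * sigma, center, center + 3 * sigma
```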

  6. Enabling Web-Based Analysis of CUAHSI HIS Hydrologic Data Using R and Web Processing Services

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Kadlec, J.; Bayles, M.; Seul, M.; Hooper, R. P.; Cummings, B.

    2015-12-01

    The CUAHSI Hydrologic Information System (CUAHSI HIS) provides open access to a large number of hydrological time series observation and modeled data from many parts of the world. Several software tools have been designed to simplify searching and access to the CUAHSI HIS datasets. These software tools include: Desktop client software (HydroDesktop, HydroExcel), developer libraries (WaterML R Package, OWSLib, ulmo), and the new interactive search website, http://data.cuahsi.org. An issue with using the time series data from CUAHSI HIS for further analysis by hydrologists (for example for verification of hydrological and snowpack models) is the large heterogeneity of the time series data. The time series may be regular or irregular, contain missing data, have different time support, and be recorded in different units. R is a widely used computational environment for statistical analysis of time series and spatio-temporal data that can be used to assess fitness and perform scientific analyses on observation data. R includes the ability to record a data analysis in the form of a reusable script. The R script together with the input time series dataset can be shared with other users, making the analysis more reproducible. The major goal of this study is to examine the use of R as a Web Processing Service for transforming time series data from the CUAHSI HIS and sharing the results on the Internet within HydroShare. HydroShare is an online data repository and social network for sharing large hydrological data sets such as time series, raster datasets, and multi-dimensional data. It can be used as a permanent cloud storage space for saving the time series analysis results. We examine the issues associated with running R scripts online: including code validation, saving of outputs, reporting progress, and provenance management. An explicit goal is that the script which is run locally should produce exactly the same results as the script run on the Internet. Our design can be used as a model for other studies that need to run R scripts on the web.

  7. Duality between Time Series and Networks

    PubMed Central

    Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.

    2011-01-01

    Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093

  8. General Aviation Avionics Statistics : 1976

    DOT National Transportation Integrated Search

    1979-11-01

    This report presents avionics statistics for the 1976 general aviation (GA) aircraft fleet and is the third in a series titled "General Aviation Avionics Statistics." The statistics are presented in a capability group framework which enables one to r...

  9. General Aviation Avionics Statistics : 1978 Data

    DOT National Transportation Integrated Search

    1980-12-01

    The report presents avionics statistics for the 1978 general aviation (GA) aircraft fleet and is the fifth in a series titled "General Aviation Avionics Statistics." The statistics are presented in a capability group framework which enables one to relate airb...

  10. General Aviation Avionics Statistics : 1979 Data

    DOT National Transportation Integrated Search

    1981-04-01

    This report presents avionics statistics for the 1979 general aviation (GA) aircraft fleet and is the sixth in a series titled General Aviation Avionics Statistics. The statistics are presented in a capability group framework which enables one to relate...

  11. Licorice Production and Manufacturing: All-Sorts of Practical Applications for Statistics

    ERIC Educational Resources Information Center

    Watson, Jane; Skalicky, Jane; Fitzallen, Noleine; Wright, Suzie

    2009-01-01

    Among the practical applications of statistics is the collection of data from manufacturing processes. Often collected in the form of a time series, data from a series of measurements show the variation in those measurements, such as the mass of a manufactured product. Limits are set for quality control and if these are exceeded then a…

  12. Opticians Employed in Health Services; United States--1969. Vital and Health Statistics, Series 14, No. 3.

    ERIC Educational Resources Information Center

    National Center for Health Statistics (DHEW/PHS), Hyattsville, MD.

    First in a series of statistical reports on personnel providing vision and eye care assistance, the report presents data collected by the Bureau of Census (geographic location, age, sex, education, type and place of employment, training, specialties, activities, and time spent at work) concerning opticians actively engaged in their profession…

  13. Common pitfalls in statistical analysis: Understanding the properties of diagnostic tests - Part 1.

    PubMed

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    In this article in our series on common pitfalls in statistical analysis, we look at some of the attributes of diagnostic tests (i.e., tests which are used to determine whether an individual does or does not have disease). The next article in this series will focus on further issues related to diagnostic tests.

  14. Statistical properties of Charney-Hasegawa-Mima zonal flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.

    2015-05-15

    A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relax to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.

  15. On the design of Henon and logistic map-based random number generator

    NASA Astrophysics Data System (ADS)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one approach to generating the key sequence. The randomness sources of TRNGs are divided into three main groups: electrical noise based, jitter based, and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed and simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvester method to obtain the series of random bits. Without any post-processing, the proposed design generated a random bit sequence with high entropy value and passed all NIST 800-22 statistical tests.
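
    As a rough software simulation of the design principle (not the paper's LabVIEW implementation, and deterministic rather than truly random), the sketch below iterates a 2D Henon map and a 1D logistic map, harvests one bit from each with a comparator, and combines them by XOR. Parameter values and thresholds are conventional choices, not taken from the paper.

    ```python
    # Simulation sketch: harvest bits from two chaotic maps via comparators.
    def chaotic_bits(n, x=0.1, y=0.3, z=0.4):
        a, b, r = 1.4, 0.3, 3.99            # classic Henon and logistic parameters
        bits = []
        for _ in range(n):
            x, y = 1 - a * x * x + y, b * x  # Henon map step (simultaneous update)
            z = r * z * (1 - z)              # logistic map step
            # Comparator on each state variable, combined by XOR
            bits.append((1 if x > 0 else 0) ^ (1 if z > 0.5 else 0))
        return bits

    print("".join(map(str, chaotic_bits(64))))
    ```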

  16. Did You Know? Video Series - SEER Cancer Statistics

    Cancer.gov

    Videos that explain cancer statistics. Choose from topics including survival, statistics overview, survivorship, disparities, and specific cancer types including breast, lung, colorectal, prostate, melanoma of the skin, and others.

  17. SSSFD manipulator engineering using statistical experiment design techniques

    NASA Technical Reports Server (NTRS)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6-degree-of-freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost-efficient tools were combined to perform a study of robot manipulator design parameters: graphical computer simulation and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined with minimal cost.
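
    As a toy illustration of pairing a simulation with a Taguchi orthogonal array (here the L4 array for three two-level factors), the sketch below evaluates a hypothetical dexterity metric over the array and ranks the factors by main effect. The metric function and factor names are made-up stand-ins for the graphical robot simulation.

    ```python
    import numpy as np

    L4 = np.array([[0, 0, 0],      # each row: one simulation run
                   [0, 1, 1],      # columns: three two-level design factors
                   [1, 0, 1],
                   [1, 1, 0]])

    def dexterity(run):            # toy performance metric (assumption)
        levels = np.where(run == 0, -1.0, 1.0)
        return 5.0 + 2.0 * levels[0] + 0.5 * levels[1] - 1.0 * levels[2]

    y = np.array([dexterity(r) for r in L4])
    for j, name in enumerate(["link length", "offset", "joint limit"]):
        effect = y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()
        print(f"{name}: main effect = {effect:+.2f}")
    ```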

  18. Quantitative evaluation of cross correlation between two finite-length time series with applications to single-molecule FRET.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-11-06

    The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the variance of the cross correlation function has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.
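
    A small numpy sketch of the quantity under study follows: the normalized cross-correlation of two finite series at a chosen lag, tested against the roughly 1/sqrt(N) standard error expected for independent series. The 2/sqrt(N) threshold is a common rule of thumb, not the paper's derived variance expression.

    ```python
    import numpy as np

    def cross_corr(x, y, lag=0):
        # Normalized cross-correlation at a given lag
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        if lag > 0:
            x, y = x[:-lag], y[lag:]
        elif lag < 0:
            x, y = x[-lag:], y[:lag]
        return np.mean(x * y)

    rng = np.random.default_rng(1)
    x, y = rng.normal(size=500), rng.normal(size=500)
    r, n = cross_corr(x, y, lag=3), 500 - 3
    print(f"r = {r:.3f}, significant: {abs(r) > 2 / np.sqrt(n)}")
    ```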

  19. The impact of raising the minimum drinking age on driver fatalities.

    PubMed

    MacKinnon, D P; Woodward, J A

    1986-12-01

    Time series analysis was used to obtain statistical tests of the impact of raising the drinking age on monthly driver fatalities in Illinois, Michigan, and Massachusetts. A control series design permitted comparison between younger drivers (21 years or younger) and older drivers (25 and older) within states where the minimum drinking age was raised. Since the two groups share the same driving conditions, it was important to demonstrate that any reduction in fatalities was limited to the young age group within which the drinking age change occurred. In addition, control states were selected to permit a comparison between driver fatalities of the young age group (21 or younger) in states with the law change and young drivers in states without the law change. Significant immediate reductions in fatalities among drivers 21 and younger in Illinois and Michigan were observed after these states raised their minimum drinking age. No significant reductions in any control series were observed. A linear decrease in young driver fatalities was observed after the drinking age was raised in Massachusetts. There was also a significant linear decrease in young driver fatalities in the Connecticut control series, perhaps due to increasing awareness among young drivers of the dangers of drinking and driving.

  20. Joint modelling of annual maximum drought severity and corresponding duration

    NASA Astrophysics Data System (ADS)

    Tosunoglu, Fatih; Kisi, Ozgur

    2016-12-01

    In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, the history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD), which are extracted from daily streamflow of seven gauge stations located in the Çoruh Basin, Turkey. On evaluation of various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model the joint distribution of the AMS and CD series. With respect to the Anderson-Darling and Cramér-von Mises statistical tests and the tail dependence assessment, the Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series, which can be useful for the design and management of reservoirs in the basin.
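
    Given fitted marginal CDF values and a fitted copula parameter theta (both assumed already estimated here, as in the study), the Gumbel-Hougaard copula and the derived joint return periods can be computed directly, as in this sketch.

    ```python
    import math

    def gumbel_hougaard(u, v, theta):
        # C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1
        return math.exp(-((-math.log(u)) ** theta
                          + (-math.log(v)) ** theta) ** (1.0 / theta))

    def joint_return_periods(u, v, theta):
        # For annual maxima, the mean interarrival time is one year
        C = gumbel_hougaard(u, v, theta)
        t_or = 1.0 / (1.0 - C)              # S > s OR D > d
        t_and = 1.0 / (1.0 - u - v + C)     # S > s AND D > d
        return t_or, t_and

    # e.g. u = F_S(s) = 0.9, v = F_D(d) = 0.9, hypothetical theta = 2.0
    print(joint_return_periods(0.9, 0.9, 2.0))
    ```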

  1. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R

    PubMed Central

    Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, no efficient and easy-to-use toolkit is yet available that focuses exclusively on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which can be categorized into three classes: (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to existing computer tools, PopSc was designed to directly accept intermediate metadata, such as allele frequencies, rather than raw DNA sequences or genotyping results. PopSc is first implemented as a web-based calculator with a user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes convenient and straightforward calculation of statistics in research. Additionally, we provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages for population genetics analysis. PMID:27792763

  2. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    PubMed

    Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, no efficient and easy-to-use toolkit is yet available that focuses exclusively on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which can be categorized into three classes: (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to existing computer tools, PopSc was designed to directly accept intermediate metadata, such as allele frequencies, rather than raw DNA sequences or genotyping results. PopSc is first implemented as a web-based calculator with a user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes convenient and straightforward calculation of statistics in research. Additionally, we provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages for population genetics analysis.
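
    To illustrate the "intermediate metadata" interface, here is a small sketch (ours, not PopSc code) of two of the basic quantities such a toolkit covers, computed directly from allele frequencies: expected heterozygosity and Wright's F_ST for a biallelic locus.

    ```python
    import numpy as np

    def expected_heterozygosity(p):
        # p: frequency of one allele at a biallelic locus
        return 2 * p * (1 - p)

    def fst_biallelic(subpop_freqs):
        # F_ST = (H_T - H_S) / H_T for subpopulation allele frequencies
        p = np.asarray(subpop_freqs, dtype=float)
        h_t = expected_heterozygosity(p.mean())    # total expected heterozygosity
        h_s = expected_heterozygosity(p).mean()    # mean subpopulation heterozygosity
        return (h_t - h_s) / h_t

    print(fst_biallelic([0.2, 0.5, 0.8]))  # diverged subpopulations -> F_ST > 0
    ```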

  3. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.

  4. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.

  5. a Method of Time-Series Change Detection Using Full Polsar Images from Different Sensors

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.

    2018-04-01

    Most existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of a time-series PolSAR dataset is calculated by omnibus test statistics. Secondly, difference images between any two images at different times are acquired by the Rj test statistic. A generalized Gaussian mixture model (GGMM) is used to obtain time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can detect time-series change from different sensors.

  6. National Lakes Assessment 2012: A Collaborative Survey of ...

    EPA Pesticide Factsheets

    The National Lakes Assessment 2012: A Collaborative Survey of Lakes in the United States presents the results of a second evaluation of the lakes in the United States. This report is part of the National Aquatic Resource Surveys, a series of statistically based surveys designed to provide the public and decision makers with nationally consistent and representative information on the condition of all the nation's waters. The NLA 2012 report provides information on the biological, chemical, physical and recreational condition of lakes, key stressors, and how the condition of lakes has changed since 2007.

  7. Applications of geostatistics and Markov models for logo recognition

    NASA Astrophysics Data System (ADS)

    Pham, Tuan

    2003-01-01

    Spatial covariances based on geostatistics are extracted as representative features of logo or trademark images. These spatial covariances differ from other statistical features for image analysis in that the structural information of an image is independent of the pixel locations and is represented in terms of spatial series. We then design a classifier based on hidden Markov models to make use of these geostatistical sequential data to recognize the logos. High recognition rates are obtained when testing the method against a public-domain logo database.

  8. Applications of statistics to medical science (1) Fundamental concepts.

    PubMed

    Watanabe, Hiroshi

    2011-01-01

    The conceptual framework of statistical tests and statistical inference is discussed, and the epidemiological background of statistics is briefly reviewed. This study is one of a series in which we survey the basics of statistics and practical methods used in medical statistics. Arguments related to actual statistical analysis procedures will be made in subsequent papers.

  9. NASA historical data book. Volume 4: NASA resources 1969-1978

    NASA Technical Reports Server (NTRS)

    Gawdiak, Ihor Y.; Fedor, Helen

    1994-01-01

    This is Volume 4, NASA Resources 1969-1978, of a series providing a 20-year statistical summary of NASA programs. This series is an important component of NASA published historical reference works, used by NASA personnel, managers, external researchers, and other government agencies. This volume combines statistical data of the component facilities with the data of the parent installation.

  10. Statistical power in parallel group point exposure studies with time-to-event outcomes: an empirical comparison of the performance of randomized controlled trials and the inverse probability of treatment weighting (IPTW) approach.

    PubMed

    Austin, Peter C; Schuster, Tibor; Platt, Robert W

    2015-10-15

    Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
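
    The IPTW step itself is compact. A minimal sketch follows (with simulated data and illustrative variable names, not the paper's simulation design): estimate the propensity score by logistic regression, then weight each subject by the inverse probability of the treatment actually received.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 3))                        # baseline confounders
    treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded assignment

    # Propensity score = P(treatment | confounders)
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
    weights = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
    # 'weights' would then enter a weighted Cox model for the time-to-event outcome
    print(weights.mean())
    ```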

  11. Use of interrupted time series analysis in evaluating health care quality improvements.

    PubMed

    Penfold, Robert B; Zhang, Fang

    2013-01-01

    Interrupted time series (ITS) analysis is arguably the strongest quasi-experimental research design. ITS is particularly useful when a randomized trial is infeasible or unethical. The approach usually involves constructing a time series of population-level rates for a particular quality improvement focus (eg, rates of attention-deficit/hyperactivity disorder [ADHD] medication initiation) and testing statistically for a change in the outcome rate in the time periods before and after implementation of a policy/program designed to change the outcome. In parallel, investigators often analyze rates of negative outcomes that might be (unintentionally) affected by the policy/program. We discuss why ITS is a useful tool for quality improvement. Strengths of ITS include the ability to control for secular trends in the data (unlike a 2-period before-and-after t test), the ability to evaluate outcomes using population-level data, clear graphical presentation of results, ease of conducting stratified analyses, and the ability to evaluate both intended and unintended consequences of interventions. Limitations of ITS include the need for a minimum of 8 time periods before and 8 after an intervention to evaluate changes statistically, difficulty in analyzing the independent impact of separate components of a program that are implemented close together in time, and the need for a suitable control population. Investigators must also be careful not to make individual-level inferences when population-level rates are used to evaluate interventions (though ITS can be used with individual-level data). A brief description of ITS is provided, including a fully implemented (but hypothetical) study of the impact of a program to reduce ADHD medication initiation in children younger than 5 years old and insured by Medicaid in Washington State. An example of the database needed to conduct an ITS is provided, as well as SAS code to implement a difference-in-differences model using preschool-age children in California as a comparison group.
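
    The abstract mentions SAS code for the hypothetical ADHD example; as a rough Python analogue (our own sketch, with simulated monthly rates), a segmented-regression ITS model estimates the baseline level and trend plus the level and trend changes at the intervention.

    ```python
    import numpy as np
    import statsmodels.api as sm

    t = np.arange(24)                        # 24 monthly outcome rates
    post = (t >= 12).astype(float)           # intervention at month 12
    time_after = np.where(t >= 12, t - 12, 0)

    # Simulated series: level drop of 2 and trend change of -0.3 at month 12
    rate = (10 + 0.1 * t - 2 * post - 0.3 * time_after
            + np.random.default_rng(7).normal(0, 0.4, 24))

    X = sm.add_constant(np.column_stack([t, post, time_after]))
    fit = sm.OLS(rate, X).fit()
    print(fit.params)  # [baseline level, baseline trend, level change, trend change]
    ```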

  12. Quantifying memory in complex physiological time-series.

    PubMed

    Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.

  13. Quantifying Memory in Complex Physiological Time-Series

    PubMed Central

    Shirazi, Amir H.; Raoufy, Mohammad R.; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R.; Amodio, Piero; Jafari, G. Reza; Montagnese, Sara; Mani, Ali R.

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of “memory length” was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are ‘forgotten’ quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations. PMID:24039811

  14. State transportation statistics 2003

    DOT National Transportation Integrated Search

    2003-12-01

    The Bureau of Transportation Statistics (BTS) presents a statistical : profile of transportation in the 50 states and the District of Columbia. : This document supplements a previously published series of individual : state profiles. Like the individ...

  15. An assessment of the validity and discrimination of the intensive time-series design by monitoring learning differences between students with different cognitive tendencies

    NASA Astrophysics Data System (ADS)

    Farnsworth, Carolyn H.; Mayer, Victor J.

    Intensive time-series designs for classroom investigations have been under development since 1975. Studies have been conducted to determine their feasibility (Mayer & Lewis, 1979), their potential for monitoring knowledge acquisition (Mayer & Kozlow, 1980), and the potential threat to validity of the frequency of testing inherent in the design (Mayer & Rojas, 1982). This study, an extension of those previous studies, is an attempt to determine the degree of discrimination the design allows in collecting data on achievement. It also serves as a replication of the Mayer and Kozlow study, an attempt to determine design validity for collecting achievement data. The investigator used her eighth-grade earth science students from a suburban Columbus (Ohio) junior high school. A multiple-group single intervention time-series design (Glass, Willson, & Gottman, 1975) was adapted to the collection of daily data on achievement in the topic of the intervention, a unit on plate tectonics. Single multiple-choice items were randomly assigned to each of three groups of students, identified on the basis of their ranking on a written test of cognitive level (Lawson, 1978). The top third, or those with formal cognitive tendencies, were compared on the basis of knowledge achievement and understanding achievement with the lowest third of the students, or those with concrete cognitive tendencies, to determine if the data collected in the design would discriminate between the two groups. Several studies (Goodstein & Howe, 1978; Lawson & Renner, 1975) indicated that students with formal cognitive tendencies should learn a formal concept such as plate tectonics with greater understanding than should students with concrete cognitive tendencies. Analyses used were a comparison of regression lines in each of the three study stages: baseline, intervention, and follow-up; t-tests of means of days summed across each stage; and a time-series analysis program. Statistically significant differences were found between the two groups both in slopes of regression lines (0.0001) and in t-tests (0.0005) on both knowledge and understanding levels of learning. These differences confirm the discrimination of the intensive time-series design in showing that it can distinguish differences in learning between students with formal cognitive tendencies and those with concrete cognitive tendencies. The time-series analysis model with a trend in the intervention was better than a model with no trend for both groups of students, in that it accounted for a greater amount of variance in the data from both knowledge and understanding levels of learning. This finding adds further confidence in the validity of the design for obtaining achievement data. When the analysis model with trend was used on data from the group with formal cognitive tendencies, it accounted for a greater degree of variance than the same model applied to the data from the group with concrete cognitive tendencies. This more conservative analysis, therefore, gave results consistent with those from the more usual linear regression techniques and t-tests, further adding to the confidence in the discrimination of the design.

  16. Homogenising time series: Beliefs, dogmas and facts

    NASA Astrophysics Data System (ADS)

    Domonkos, P.

    2010-09-01

    For obtaining reliable information about climate change and climate variability, the use of high quality data series is essential, and one basic tool of quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics closely approach those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with the confirmation of metadata information? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? c) Is it good to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results, from the COST benchmark and from other experiments, show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities look like part of the climatic variability, so the classic approach of finding and correcting the change-points of observed time series one by one cannot be applied in its pure form. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in raw time series. The developers and users of homogenisation methods have to bear in mind that the eventual purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise well the climate change and climate variability.

  17. Change-point analysis of geophysical time-series: application to landslide displacement rate (Séchilienne rock avalanche, France)

    NASA Astrophysics Data System (ADS)

    Amorese, D.; Grasso, J.-R.; Garambois, S.; Font, M.

    2018-05-01

    The rank-sum multiple change-point method is a robust statistical procedure designed to search for the optimal number and location of change points in an arbitrary continuous or discrete sequence of values. As such, this procedure can be used to analyse time-series data. Twelve years of robust data sets for the Séchilienne (French Alps) rockslide show a continuous increase in average displacement rate from 50 to 280 mm per month in the 2004-2014 period, followed by a strong decrease back to 50 mm per month in the 2014-2015 period. Where previous studies tentatively suggest possible kinematic phases, they rely solely on empirical threshold values. In this paper, we analyse how the use of a statistical algorithm for change-point detection helps to better understand time phases in landslide kinematics. First, we test the efficiency of the statistical algorithm on geophysical benchmark data, these data sets (stream flows and Northern Hemisphere temperatures) having already been analysed by independent statistical tools. Second, we apply the method to 12-yr daily time-series of the Séchilienne landslide, for rainfall and displacement data, from 2003 December to 2015 December, in order to quantitatively extract changes in landslide kinematics. We find two strong significant discontinuities in the weekly cumulated rainfall values: an average rainfall rate increase is resolved in 2012 April and a decrease in 2014 August. Four robust changes are highlighted in the displacement time-series (2008 May, 2009 November-December to 2010 January, 2012 September and 2014 March), the 2010 one being preceded by a significant but weak rainfall rate increase (in 2009 November). Accordingly, we are able to quantitatively define five kinematic stages for the Séchilienne rock avalanche during this period. The synchronization between the rainfall and displacement rate, only resolved at the end of 2009 and beginning of 2010, corresponds to a remarkable change (a fourfold increase in mean displacement rate) in the landslide kinematics. This suggests that an increase of the rainfall is able to drive an increase of the landslide displacement rate, but that most of the kinematics of the landslide is not directly attributable to rainfall amount. The detailed exploration of the characteristics of the five kinematic stages suggests that the weekly averaged displacement rates are more tied to the frequency of rainy days than to the rainfall rate values. These results suggest that the pattern of the Séchilienne rock avalanche is consistent with previous findings that landslide kinematics depends not only upon rainfall but also upon soil moisture conditions (which are known to be more strongly related to precipitation frequency than to precipitation amount). Finally, our analysis of the displacement rate time-series pinpoints a change in the susceptibility of the slope response to rainfall, which was slower before the end of 2009 than after. The kinematic history as depicted by statistical tools opens new routes to understanding the apparent complexity of the Séchilienne landslide kinematics.
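
    A single change-point version of the rank-sum idea is the classical Pettitt test; the sketch below is our own implementation of that test, not the paper's multiple change-point procedure. It locates the most likely change point and gives an approximate p-value.

    ```python
    import numpy as np

    def pettitt(x):
        x = np.asarray(x, dtype=float)
        n = len(x)
        r = np.argsort(np.argsort(x)) + 1            # ranks 1..n (no tie handling)
        # U_t = 2 * (sum of ranks up to t) - t * (n + 1), for t = 1..n-1
        U = 2 * np.cumsum(r)[:-1] - np.arange(1, n) * (n + 1)
        k = np.argmax(np.abs(U))
        K = np.abs(U[k])
        p = 2 * np.exp(-6 * K**2 / (n**3 + n**2))    # approximate p-value
        return k + 1, p                              # change after this index

    rng = np.random.default_rng(3)
    x = np.r_[rng.normal(0.0, 1.0, 50), rng.normal(1.5, 1.0, 50)]
    print(pettitt(x))   # expected: change point near index 50, small p-value
    ```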

  18. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, continuously recorded time series. Because the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied in this work. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomaly signal (a seismic event) is modeled as a change point in the time series (the seismic record): the statistical model of the signal changes in the neighborhood of an event point because a seismic event has occurred. This means the SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property: SDAR introduces a discounting parameter that reduces the influence of older data on the current statistics, making SDAR robust for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we employ SDAR on a synthetic model and on Tomakomai Ocean Bottom Cable (OBC) baseline data to prove the feasibility and advantage of our method.
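
    A heavily simplified sketch of the SDAR idea follows: an AR(1) model whose statistics are updated online with a discounting factor, scoring each new sample by its negative log likelihood under the current model. The real method uses higher-order AR models and different update equations; all constants here are illustrative.

    ```python
    import math

    def sdar_scores(x, discount=0.02):
        # Online, discounted AR(1): older samples lose influence exponentially.
        mu, c0, c1, var, scores = 0.0, 1.0, 0.0, 1.0, []
        prev = x[0]
        for xt in x[1:]:
            a = c1 / c0 if c0 > 0 else 0.0       # AR(1) coefficient estimate
            pred = mu + a * (prev - mu)
            err = xt - pred
            # Anomaly score: negative log likelihood of the new sample
            scores.append(0.5 * math.log(2 * math.pi * var) + err * err / (2 * var))
            # Discounted updates of mean, covariances, and innovation variance
            mu = (1 - discount) * mu + discount * xt
            c0 = (1 - discount) * c0 + discount * (prev - mu) ** 2
            c1 = (1 - discount) * c1 + discount * (prev - mu) * (xt - mu)
            var = (1 - discount) * var + discount * err * err
            prev = xt
        return scores   # scores spike where the statistics of the record change
    ```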

  19. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle in late 1999 on its mission to observe planet Earth. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level, thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample of shock data is demonstrated in this paper.

  20. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective: To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method: Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results: Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions: This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298

  1. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
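
    The two families of statistics the primer demonstrates can be sketched compactly: the area under the ROC curve via scikit-learn, and an interval diagnostic likelihood ratio. The data below are simulated stand-ins for checklist scores, not the study's data.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    y = np.r_[np.ones(100), np.zeros(400)]                 # mood disorder yes/no
    score = np.r_[rng.normal(25, 8, 100), rng.normal(14, 8, 400)]
    print("AUC:", roc_auc_score(y, score))

    def interval_dlr(score, y, lo, hi):
        # DLR for a score band = P(band | disorder) / P(band | no disorder)
        in_band = (score >= lo) & (score < hi)
        p_band_dis = in_band[y == 1].mean()
        p_band_no = in_band[y == 0].mean()
        return p_band_dis / p_band_no

    print("DLR for scores >= 30:", interval_dlr(score, y, 30, np.inf))
    ```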

  2. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the Detrended Cross-Correlation Coefficient Difference, a tool to measure contagion/interdependence effects in time series of size N at different time scales n. To support this proposition we analyzed simulated and real time series. The results showed that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and we define critical values for this dependency at the 90%, 95%, and 99% confidence levels, as shown in this paper.
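
    The coefficient itself can be sketched with a non-overlapping-window DCCA implementation (our own simplified version); ΔρDCCA is then the difference between this coefficient computed over two different periods or conditions.

    ```python
    import numpy as np

    def rho_dcca(x, y, n):
        # Integrated (profile) series
        X = np.cumsum(x - np.mean(x))
        Y = np.cumsum(y - np.mean(y))
        N = len(X) - len(X) % n                  # use only full boxes of size n
        f_xy = f_xx = f_yy = 0.0
        t = np.arange(n)
        for s in range(0, N, n):
            # Local linear detrending within each box
            rx = X[s:s+n] - np.polyval(np.polyfit(t, X[s:s+n], 1), t)
            ry = Y[s:s+n] - np.polyval(np.polyfit(t, Y[s:s+n], 1), t)
            f_xy += np.mean(rx * ry)
            f_xx += np.mean(rx * rx)
            f_yy += np.mean(ry * ry)
        return f_xy / np.sqrt(f_xx * f_yy)

    rng = np.random.default_rng(2)
    z = rng.normal(size=1000)
    x = z + rng.normal(size=1000)   # two series sharing a common component
    y = z + rng.normal(size=1000)
    print(rho_dcca(x, y, n=20))     # positive cross-correlation expected
    ```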

  3. An Unsupervised Change Detection Method Using Time-Series of PolSAR Images from Radarsat-2 and GaoFen-3.

    PubMed

    Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le

    2018-02-12

    The traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by the Rj test statistic. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can not only detect time-series changes from different sensors, but can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.
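
    The final classification step can be sketched with an ordinary two-component Gaussian mixture; scikit-learn has no generalized Gaussian mixture, so this is a stand-in for the paper's GGMM. The sketch fits the mixture to simulated difference-image values and labels the component with the larger mean as "change".

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(9)
    # Simulated difference-image values: 90% unchanged pixels, 10% changed
    diff = np.r_[rng.normal(0.1, 0.05, 9000), rng.normal(0.6, 0.1, 1000)]

    gmm = GaussianMixture(n_components=2, random_state=0).fit(diff.reshape(-1, 1))
    change_label = np.argmax(gmm.means_.ravel())          # larger mean = change
    mask = gmm.predict(diff.reshape(-1, 1)) == change_label
    print("fraction flagged as change:", mask.mean())
    ```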

  4. Highway Statistics 1970

    DOT National Transportation Integrated Search

    1971-10-01

    This publication was prepared by the Highway Statistics Division, Office of Highway Planning, Federal Highway Administration. The 26th of an annual series, it presents the 1970 statistical and analytical tables of general interest on motor fuel, moto...

  5. Back to basics: an introduction to statistics.

    PubMed

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second article in this series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  6. Mixed-Methods Research in Nutrition and Dietetics.

    PubMed

    Zoellner, Jamie; Harris, Jeffrey E

    2017-05-01

    This work focuses on mixed-methods research (MMR) and is the 11th in a series exploring the importance of research design, statistical analysis, and epidemiologic methods as applied to nutrition and dietetics research. MMR is an investigative technique that applies both quantitative and qualitative data. The purpose of this article is to define MMR; describe its history and nature; provide reasons for its use; describe and explain the six different MMR designs; describe sample selection; and provide guidance in data collection, analysis, and inference. MMR concepts are applied and integrated with nutrition-related scenarios in real-world research contexts, and summary recommendations are provided.

  7. Gas-turbine critical research and advanced technology support project

    NASA Technical Reports Server (NTRS)

    Clark, J. S.; Hodge, P. E.; Lowell, C. E.; Anderson, D. N.; Schultz, D. F.

    1981-01-01

    A technology data base for utility gas turbine systems capable of burning coal-derived fuels was developed. The following areas were investigated: combustion, materials, and system studies. A two-stage test rig was designed to study the conversion of fuel-bound nitrogen to NOx. The feasibility of using heavy fuels in catalytic combustors was evaluated. A statistically designed series of hot corrosion burner rig tests was conducted to measure the corrosion rates of typical gas turbine alloys with several fuel contaminants. Fuel additives and several advanced thermal barrier coatings were tested. Thermal barrier coatings used in conjunction with low critical alloys and those used in a combined cycle system in which the stack temperature was maintained above the acid corrosion temperature were also studied.

  8. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2013-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de-facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, selected statistics obtained in determining and parameterizing the probability distribution functions, and the potential implications for engineering design of recently published estimates.

  9. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2011-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de-facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, selected statistics obtained in determining and parameterizing the probability distribution functions, and the potential implications for engineering design of recently published estimates.
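
    The L-moment statistics mentioned above can be computed from a sorted annual-maximum series with the standard unbiased b-weight estimators, as in this sketch; the sample record below is hypothetical.

    ```python
    import numpy as np

    def l_moments(x):
        # First three sample L-moments via the unbiased b-weight estimators
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
        return l1, l2, l3 / l2        # mean, L-scale, L-skewness

    # e.g. annual maximum daily precipitation (mm), hypothetical record:
    print(l_moments([61, 84, 52, 97, 73, 110, 66, 58, 90, 79]))
    ```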

  10. Statistical modeling of isoform splicing dynamics from RNA-seq time series data.

    PubMed

    Huang, Yuanhua; Sanguinetti, Guido

    2016-10-01

    Isoform quantification is an important goal of RNA-seq experiments, yet it remains problematic for genes with low expression or several isoforms. These difficulties may in principle be ameliorated by exploiting correlated experimental designs, such as time series or dosage response experiments. Time series RNA-seq experiments, in particular, are becoming increasingly popular, yet there are no methods that explicitly leverage the experimental design to improve isoform quantification. Here, we present DICEseq, the first isoform quantification method tailored to correlated RNA-seq experiments. DICEseq explicitly models the correlations between different RNA-seq experiments to aid the quantification of isoforms across experiments. Numerical experiments on simulated datasets show that DICEseq yields more accurate results than state-of-the-art methods, an advantage that can become considerable at low coverage levels. On real datasets, our results show that DICEseq provides substantially more reproducible and robust quantifications, increasing the correlation of estimates from replicate datasets by up to 10% on genes with low or moderate expression levels (bottom third of all genes). Furthermore, DICEseq makes it possible to quantify the trade-off between temporal sampling of RNA and depth of sequencing, frequently an important choice when planning experiments. Our results have strong implications for the design of RNA-seq experiments and offer a novel tool for improved analysis of such datasets. Python code is freely available at http://diceseq.sf.net. Contact: G.Sanguinetti@ed.ac.uk. Supplementary data are available at Bioinformatics online.

  11. Quasi-experimental study designs series-paper 1: introduction: two historical lineages.

    PubMed

    Bärnighausen, Till; Røttingen, John-Arne; Rockers, Peter; Shemilt, Ian; Tugwell, Peter

    2017-09-01

    Objective: The objective of this study was to contrast the historical development of experiments and quasi-experiments and provide the motivation for a journal series on quasi-experimental designs in health research. Study design: A short historical narrative, with concrete examples, and arguments based on an understanding of the practice of health research and evidence synthesis. Results: Health research has played a key role in developing today's gold standard for causal inference: the randomized controlled multiply blinded trial. Historically, allocation approaches developed from convenience and purposive allocation to alternate and, finally, to random allocation. This development was motivated both by concerns for manipulation in allocation as well as statistical and theoretical developments demonstrating the power of randomization in creating counterfactuals for causal inference. In contrast to the sequential development of experiments, quasi-experiments originated at very different points in time, from very different scientific perspectives, and with frequent and long interruptions in their methodological development. Health researchers have only recently started to recognize the value of quasi-experiments for generating novel insights on causal relationships. Conclusion: While quasi-experiments are unlikely to replace experiments in generating the efficacy and safety evidence required for clinical guidelines and regulatory approval of medical technologies, quasi-experiments can play an important role in establishing the effectiveness of health care practice, programs, and policies. The papers in this series describe and discuss a range of important issues in utilizing quasi-experimental designs for primary research and quasi-experimental results for evidence synthesis.

  12. Common pitfalls in statistical analysis: Clinical versus statistical significance

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results, which are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects its impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754

  13. The effectiveness of therapeutic assessment with an adult client: a single-case study using a time-series design.

    PubMed

    Aschieri, Filippo; Smith, Justin D

    2012-01-01

    This article presents the therapeutic assessment (TA; Finn, 2007) of a traumatized young woman named Claire. Claire reported feeling debilitated by academic demands and the expectations of her parents, and was finding it nearly impossible to progress in her studies. She was also finding it difficult to develop and sustain intimate relationships. The emotional aspects of close relationships were extremely difficult for her and she routinely blamed herself for her struggles in this arena. The assessor utilized the TA model for adults, with the exception of not including an optional intervention session. The steps of TA, particularly the extended inquiry and the discussion of test findings along the way, cultivated a supportive and empathic atmosphere with Claire. By employing the single-case time-series experimental design used in previous TA studies (e.g., Smith, Handler, & Nash, 2010; Smith, Wolf, Handler, & Nash, 2009), the authors demonstrated that Claire experienced statistically significant improvement correlated with the onset of TA. Results indicated that participation in TA coincided with a positive shift in the trajectory of her reported symptoms and with recognizing the affection she held for others in her life. This case illustrates the successful application of case-based time-series methodology in the evaluation of an adult TA. The potential implications for future study are discussed.

  14. SoccerStories: a kick-off for visual soccer analysis.

    PubMed

    Perin, Charles; Vuillemot, Romain; Fekete, Jean-Daniel

    2013-12-01

    This article presents SoccerStories, a visualization interface to support analysts in exploring soccer data and communicating interesting insights. Currently, most analyses of such data relate to statistics on individual players or teams. However, the soccer analysts we collaborated with consider that quantitative analysis alone does not convey the right picture of the game, as context, player positions and phases of player actions are the most relevant aspects. We designed SoccerStories to support and enrich the current practice of soccer analysts, in both the analysis and communication stages. Our system provides an overview+detail interface of game phases, and their aggregation into a series of connected visualizations, each visualization being tailored for actions such as a series of passes or a goal attempt. To evaluate our tool, we ran two qualitative user studies on recent games using SoccerStories with data from one of the world's leading live sports data providers. The first study resulted in a series of four articles on soccer tactics by a tactics analyst, who said he would not have been able to write them otherwise. The second study consisted of an exploratory follow-up to investigate design alternatives for embedding soccer phases into word-sized graphics. In both studies we received very enthusiastic feedback, and participants expressed interest in further use of SoccerStories to enhance their current workflow.

  15. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.
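
    To make the boundary-detection idea concrete, here is a minimal sketch (ours, not the authors' T-TSAFE implementation): fit a Gaussian-process surrogate to a handful of simulation runs and flag parameter-space points whose predicted separation falls below a safety threshold. The stand-in simulator `min_separation`, the parameter names, and the 3 nmi threshold are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def min_separation(params):
        """Stand-in for an expensive airspace simulation: minimum predicted separation (nmi)."""
        speed_dev, response_delay = params
        return 5.0 - 0.03 * speed_dev - 0.4 * response_delay

    rng = np.random.default_rng(14)
    X = rng.uniform([0, 0], [100, 10], (40, 2))      # sampled design points
    y = np.array([min_separation(p) for p in X])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[50, 5]), normalize_y=True)
    gp.fit(X, y)

    # Probe a grid; flag points whose predicted separation dips below the 3 nmi
    # standard, using the GP's posterior standard deviation as the uncertainty.
    grid = np.column_stack([g.ravel() for g in
                            np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 10, 25))])
    mean, std = gp.predict(grid, return_std=True)
    unsafe = grid[mean < 3.0]
    print(f"{len(unsafe)} of {len(grid)} probed parameter points predicted unsafe "
          f"(mean posterior sd {std.mean():.2f} nmi)")
    ```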

  16. Design of a New Concentration Series for the Orthogonal Sample Design Approach and Estimation of the Number of Reactions in Chemical Systems.

    PubMed

    Shi, Jiajia; Liu, Yuhai; Guo, Ran; Li, Xiaopei; He, Anqi; Gao, Yunlong; Wei, Yongju; Liu, Cuige; Zhao, Ying; Xu, Yizhuang; Noda, Isao; Wu, Jinguang

    2015-11-01

    A new concentration series is proposed for the construction of a two-dimensional (2D) synchronous spectrum for orthogonal sample design analysis to probe intermolecular interaction between solutes dissolved in the same solutions. The obtained 2D synchronous spectrum possesses the following two properties: (1) cross peaks in the 2D synchronous spectra can be used to reflect intermolecular interaction reliably, since interference portions that have nothing to do with intermolecular interaction are completely removed, and (2) the two-dimensional synchronous spectrum produced can effectively avoid accidental collinearity. Hence, the correct number of nonzero eigenvalues can be obtained, so that the number of chemical reactions can be estimated. In a real chemical system, noise present in one-dimensional spectra may also produce nonzero eigenvalues. To get the correct number of chemical reactions, we classified nonzero eigenvalues into significant nonzero eigenvalues and insignificant nonzero eigenvalues. Significant nonzero eigenvalues can be identified by inspecting the pattern of the corresponding eigenvector with the help of the Durbin-Watson statistic. As a result, the correct number of chemical reactions can be obtained from significant nonzero eigenvalues. This approach provides a solid basis for gaining insight into subtle spectral variations caused by intermolecular interaction.
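
    A minimal sketch of the eigenvalue-screening step described above, on synthetic spectra (our construction, not the authors' data): the Durbin-Watson statistic of each eigenvector is near 2 for noise-like vectors and near 0 for smooth, signal-like ones, so it separates significant from insignificant nonzero eigenvalues; the 1.0 cutoff is an illustrative assumption.

    ```python
    import numpy as np

    def durbin_watson(v):
        """Durbin-Watson statistic of a sequence; ~2 for white noise, ~0 for smooth signals."""
        return np.sum(np.diff(v) ** 2) / np.sum(v ** 2)

    # Hypothetical dynamic spectra: rows = concentration series, columns = wavenumbers.
    rng = np.random.default_rng(0)
    wn = np.linspace(1000, 1800, 400)
    band = np.exp(-((wn - 1400) / 30) ** 2)          # one real spectral component
    conc = np.linspace(0.1, 1.0, 8)[:, None]         # designed concentration series
    spectra = conc * band + 0.005 * rng.standard_normal((8, 400))

    sync = spectra.T @ spectra / (len(conc) - 1)     # 2D synchronous spectrum
    eigvals, eigvecs = np.linalg.eigh(sync)
    order = np.argsort(eigvals)[::-1]
    for k in order[:3]:
        dw = durbin_watson(eigvecs[:, k])
        kind = "significant" if dw < 1.0 else "noise-like"   # illustrative cutoff
        print(f"eigenvalue {eigvals[k]:.3e}  DW {dw:.2f}  -> {kind}")
    ```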

  17. Event coincidence analysis for quantifying statistical interrelationships between event time series. On the role of flood events as triggers of epidemic outbreaks

    NASA Astrophysics Data System (ADS)

    Donges, J. F.; Schleussner, C.-F.; Siegmund, J. F.; Donner, R. V.

    2016-05-01

    Studying event time series is a powerful approach for analyzing the dynamics of complex dynamical systems in many fields of science. In this paper, we describe the method of event coincidence analysis, which provides a framework for quantifying the strength, directionality and time lag of statistical interrelationships between event series. Event coincidence analysis makes it possible to formulate and test null hypotheses on the origin of the observed interrelationships, including tests based on Poisson processes or, more generally, stochastic point processes with a prescribed inter-event time distribution and other higher-order properties. Applying the framework to country-level observational data yields evidence that flood events have acted as triggers of epidemic outbreaks globally since the 1950s. Facing projected future changes in the statistics of climatic extreme events, statistical techniques such as event coincidence analysis will be relevant for investigating the impacts of anthropogenic climate change on human societies and ecosystems worldwide.
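
    As a rough illustration of the core counting step (a sketch under our own assumptions, not the authors' code), the fragment below counts how many events in series B are preceded within a window delta_t by at least one event in series A, and compares that count against the analytic Poisson null with a one-sided binomial test; both event series are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    def precursor_coincidences(a_times, b_times, delta_t):
        """Number of B events with at least one A event in the window (t - delta_t, t]."""
        a = np.sort(np.asarray(a_times))
        k = 0
        for t in b_times:
            lo = np.searchsorted(a, t - delta_t, side="left")
            hi = np.searchsorted(a, t, side="right")
            k += hi > lo
        return int(k)

    rng = np.random.default_rng(1)
    T, lam_a, delta_t = 1000.0, 0.05, 5.0
    a = np.sort(rng.uniform(0, T, rng.poisson(lam_a * T)))        # e.g. flood events
    b = np.sort(np.concatenate([a[::4] + rng.uniform(0, delta_t, len(a[::4])),
                                rng.uniform(0, T, 10)]))          # e.g. outbreak onsets

    k, n = precursor_coincidences(a, b, delta_t), len(b)
    p_null = 1.0 - np.exp(-lam_a * delta_t)   # Poisson null: P(>=1 A event in a window)
    pval = stats.binomtest(k, n, p_null, alternative="greater").pvalue
    print(f"{k}/{n} coincidences, null expectation {p_null:.2f} per event, p = {pval:.2g}")
    ```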

  18. Speleothem stable isotope records for east-central Europe: resampling sedimentary proxy records to obtain evenly spaced time series with spectral guidance

    NASA Astrophysics Data System (ADS)

    Gábor Hatvani, István; Kern, Zoltán; Leél-Őssy, Szabolcs; Demény, Attila

    2018-01-01

    Uneven spacing is a common feature of sedimentary paleoclimate records, in many cases causing difficulties in the application of classical statistical and time series methods. Although special statistical tools do exist to assess unevenly spaced data directly, the transformation of such data into a temporally equidistant time series, which may then be examined using commonly employed statistical tools, has remained an unachieved goal. The present paper therefore introduces an approach to obtain evenly spaced time series (using cubic spline fitting) from unevenly spaced speleothem records, with the application of spectral guidance to avoid the spectral bias caused by interpolation and to retain the original spectral characteristics of the data. The methodology was applied to stable carbon and oxygen isotope records derived from two stalagmites from the Baradla Cave (NE Hungary) dating back to the late 18th century. To show the benefit of the equally spaced records for climate studies, their coherence with climate parameters is explored using wavelet transform coherence and discussed. The obtained equally spaced time series are available at https://doi.org/10.1594/PANGAEA.875917.
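
    The resampling step alone is straightforward to sketch; the spectral-guidance step, i.e., constraining the interpolation so the resampled series keeps the record's original spectral character, is the paper's contribution and is not reproduced here. A minimal cubic-spline version on synthetic, unevenly dated isotope data:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical unevenly dated isotope record: age (years AD) and d18O values.
    rng = np.random.default_rng(2)
    age = np.sort(rng.uniform(1780, 2000, 120))
    d18o = -8 + 0.5 * np.sin(2 * np.pi * age / 60) + 0.1 * rng.standard_normal(age.size)

    spline = CubicSpline(age, d18o)
    step = np.median(np.diff(age))           # keep the mean resolution of the raw record
    even_age = np.arange(age[0], age[-1], step)
    even_d18o = spline(even_age)

    # A basic sanity check: interpolation should approximately preserve the variance.
    print(f"raw var {d18o.var():.4f}  resampled var {even_d18o.var():.4f}")
    ```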

  19. Statistical Analysis of Categorical Time Series of Atmospheric Elementary Circulation Mechanisms - Dzerdzeevski Classification for the Northern Hemisphere

    PubMed Central

    Brenčič, Mihael

    2016-01-01

    Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899–2012, are analysed with statistical methods as continuous categorical time series. The classification consists of 41 elementary circulation mechanisms (ECMs), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and periodicity effects were investigated with moving dispersion filters and a randomisation procedure on the ECM categories, as well as with time analyses of the ECM mode. The time series were determined to be non-stationary, with strong time-dependent trends. During the investigated period, periodicity interchanges with periods when no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECMs has significantly changed. This change is a result of a change in the frequency of ECM categories; before 1986 the appearance of ECMs was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to categorical climatic time series opens up new potential insights for climate variability and change studies to be performed in the future. PMID:27116375

  20. Search for Correlated Fluctuations in the Beta+ Decay of Na-22

    NASA Astrophysics Data System (ADS)

    Silverman, M. P.; Strange, W.

    2008-10-01

    Claims for a "cosmogenic" force that correlates otherwise independent stochastic events have been made for at least 10 years, based largely on visual inspection of time series of histograms whose shapes were interpreted as suggestive of recurrent patterns with semi-diurnal, diurnal, and monthly periods. Building on our earlier work testing the randomness of different nuclear decay processes, we have searched for correlations in the time series of coincident positron-electron annihilations deriving from beta+ decay of Na-22. Disintegrations were counted within a narrow time window over a period of 7 days, leading to a time series of more than 1 million events. Statistical tests were performed on the raw time series, its correlation function, and its Fourier transform to search for cyclic correlations indicative of deviations from Poisson statistics that would violate quantum mechanics. The time series was then partitioned into a sequence of 167 "bags" of 8192 events each. A histogram was made of the events of each bag, with contiguous frequency classes differing by a single count. The chronological sequence of histograms was then tested for correlations within classes. In all cases the results of the tests were in accord with statistical control, giving no evidence of correlated fluctuations.

  1. Statistical Analysis of Categorical Time Series of Atmospheric Elementary Circulation Mechanisms - Dzerdzeevski Classification for the Northern Hemisphere.

    PubMed

    Brenčič, Mihael

    2016-01-01

    Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899-2012, are analysed with statistical methods as continuous categorical time series. The classification consists of 41 elementary circulation mechanisms (ECMs), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and periodicity effects were investigated with moving dispersion filters and a randomisation procedure on the ECM categories, as well as with time analyses of the ECM mode. The time series were determined to be non-stationary, with strong time-dependent trends. During the investigated period, periodicity interchanges with periods when no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECMs has significantly changed. This change is a result of a change in the frequency of ECM categories; before 1986 the appearance of ECMs was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to categorical climatic time series opens up new potential insights for climate variability and change studies to be performed in the future.

  2. Statistical Analysis of Time-Series from Monitoring of Active Volcanic Vents

    NASA Astrophysics Data System (ADS)

    Lachowycz, S.; Cosma, I.; Pyle, D. M.; Mather, T. A.; Rodgers, M.; Varley, N. R.

    2016-12-01

    Despite recent advances in the collection and analysis of time-series from volcano monitoring, and the resulting insights into volcanic processes, challenges remain in forecasting and interpreting activity from near real-time analysis of monitoring data. Statistical methods have the potential to characterise the underlying structure of these time-series and facilitate their intercomparison, and so inform interpretation of volcanic activity. We explore the utility of multiple statistical techniques that could be widely applicable to monitoring data, including Shannon entropy and detrended fluctuation analysis, by applying them to various data streams from volcanic vents during periods of temporally variable activity. Each technique reveals changes through time in the structure of some of the data that were not apparent from conventional analysis. For example, we calculate the Shannon entropy (a measure of the randomness of a signal) of time-series from the recent dome-forming eruptions of Volcán de Colima (Mexico) and Soufrière Hills (Montserrat). The entropy of real-time seismic measurements and of the count rates of certain volcano-seismic event types at both volcanoes is found to be temporally variable, with these data generally having higher entropy during periods of lava effusion and/or larger explosions. In some instances, the entropy shifts prior to or coincident with changes in seismic or eruptive activity, some of which were not clearly recognised by real-time monitoring. Comparison with other statistics demonstrates the sensitivity of the entropy to the data distribution, but also that it is distinct from conventional statistical measures such as the coefficient of variation. We conclude that each analysis technique examined could provide valuable insights for the interpretation of diverse monitoring time-series.
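
    A minimal sketch of one such computation (ours, with synthetic data standing in for RSAM-type monitoring streams): Shannon entropy estimated from a histogram in sliding windows, which rises when the signal's distribution becomes more disordered.

    ```python
    import numpy as np

    def shannon_entropy(x, bins=16):
        """Shannon entropy (bits) of the histogram of the samples in x."""
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -(p * np.log2(p)).sum()

    def sliding_entropy(x, window, step):
        starts = range(0, len(x) - window + 1, step)
        return np.array([shannon_entropy(x[s:s + window]) for s in starts])

    # Hypothetical monitoring series: quiet background, then a noisier effusive phase.
    rng = np.random.default_rng(3)
    quiet = rng.normal(0, 1, 5000)
    active = rng.standard_t(3, 5000) * 3        # heavier-tailed, more disordered phase
    h = sliding_entropy(np.concatenate([quiet, active]), window=500, step=250)
    print(np.round(h, 2))                       # entropy steps up in the active phase
    ```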

  3. Predicting long-term catchment nutrient export: the use of nonlinear time series models

    NASA Astrophysics Data System (ADS)

    Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda

    2010-05-01

    After the Second World War, nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have slowly decreased, so that the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and to compare their performance with traditional linear models of the ARMA class in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models. The analysis showed that, based on the residual sum of squares (RSS), SETAR and MSW models described both datasets better than models of the ARMA class. In most cases the relative improvement of SETAR models over first-order AR models was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime models and from 60.4% to 75% for three-regime models. However, visual assessment of the models plotted against the original datasets showed that, despite a higher RSS, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower RSS values. In both datasets, MSW models provided a very good visual fit, describing most of the extreme values.
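
    To illustrate the regime idea, here is a minimal two-regime SETAR(1) fit by grid search over candidate thresholds (our sketch on simulated data, not the authors' model selection): the regime at time t is chosen by the previous value, and each regime gets its own AR(1) coefficients by least squares.

    ```python
    import numpy as np

    def fit_setar2(y, trim=0.15):
        """Two-regime SETAR(1): grid-search the threshold on y[t-1], minimize total RSS."""
        y = np.asarray(y, float)
        x, t = y[:-1], y[1:]                   # lagged value and response
        cands = np.quantile(x, np.linspace(trim, 1 - trim, 50))
        best = None
        for c in cands:
            rss = 0.0
            for mask in (x <= c, x > c):
                X = np.column_stack([np.ones(mask.sum()), x[mask]])
                beta, res, *_ = np.linalg.lstsq(X, t[mask], rcond=None)
                rss += res[0] if res.size else ((t[mask] - X @ beta) ** 2).sum()
            if best is None or rss < best[0]:
                best = (rss, c)
        return best                            # (RSS, estimated threshold)

    rng = np.random.default_rng(4)
    y = [0.0]
    for _ in range(600):                       # simulated two-regime series
        prev = y[-1]
        y.append((0.9 * prev if prev <= 0 else 0.2 * prev) + rng.normal(0, 0.5))
    rss, thr = fit_setar2(np.array(y))
    print(f"estimated threshold {thr:.2f}, RSS {rss:.1f}")
    ```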

  4. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    PubMed Central

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most existing methods, like the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; finally, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for inspecting useful information in all kinds of bioelectric time series signals. PMID:27413364

  5. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most existing methods, like the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; finally, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for inspecting useful information in all kinds of bioelectric time series signals.
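
    For orientation, the brute-force baseline that BSTKS is designed to speed up can be sketched in a few lines (our illustrative version, not the authors' implementation): scan every admissible split point, compute the two-sample KS statistic between the segments, and take the maximizing split as the change-point estimate.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def ks_change_point(x, min_seg=30):
        """Scan all split points; return the index maximizing the two-sample KS statistic."""
        best_idx, best_stat = None, -1.0
        for i in range(min_seg, len(x) - min_seg):
            stat = ks_2samp(x[:i], x[i:]).statistic
            if stat > best_stat:
                best_idx, best_stat = i, stat
        return best_idx, best_stat

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(0, 1, 400), rng.normal(1.5, 1, 400)])
    idx, stat = ks_change_point(x)
    print(f"change point near sample {idx} (true 400), KS statistic {stat:.2f}")
    ```

    This scan is O(n^2) in the series length, which is exactly the cost the tree-based search in the paper avoids.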

  6. General aviation avionics statistics : 1977.

    DOT National Transportation Integrated Search

    1980-06-01

    This report presents avionics statistics for the 1977 general aviation (GA) aircraft fleet and is the fourth in a series. The statistics are presented in a capability group framework which enables one to relate airborne avionics equipment to the capa...

  7. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms for generating random forecast error time series and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well as the others. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
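
    None of the four algorithms is reproduced here, but the flavor of the task can be sketched with the simplest possible generator (our assumption-laden stand-in): an AR(1) process tuned to hit a target mean, standard deviation, and lag-1 autocorrelation, three of the characteristics the paper checks.

    ```python
    import numpy as np

    def ar1_error_series(n, mean, std, rho1, rng):
        """AR(1) noise with specified mean, standard deviation and lag-1 autocorrelation."""
        eps_std = std * np.sqrt(1 - rho1 ** 2)    # innovation scale for target variance
        e = np.empty(n)
        e[0] = rng.normal(0, std)
        for t in range(1, n):
            e[t] = rho1 * e[t - 1] + rng.normal(0, eps_std)
        return e + mean

    rng = np.random.default_rng(6)
    errors = ar1_error_series(8760, mean=0.0, std=120.0, rho1=0.8, rng=rng)  # MW, hourly
    print(f"std {errors.std():.1f}  lag-1 autocorr "
          f"{np.corrcoef(errors[:-1], errors[1:])[0, 1]:.2f}")
    ```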

  8. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, L.; Vogel, R. M.

    2015-12-01

    Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.

  9. Identification of PARMA Models and Their Application to the Modeling of River flows

    NASA Astrophysics Data System (ADS)

    Tesfaye, Y. G.; Meerschaert, M. M.; Anderson, P. L.

    2004-05-01

    The generation of synthetic river flow samples that can reproduce the essential statistical features of historical river flows is central to the planning, design and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. We employ a periodic ARMA (PARMA) model. The innovations algorithm can be used to obtain parameter estimates for PARMA models with finite fourth moment, as well as with infinite fourth moment but finite variance. Anderson and Meerschaert (2003) provide a method for model identification when the time series has finite fourth moment. This article, an extension of that previous work, demonstrates the effectiveness of the technique using simulated data. An application to monthly flow data for the Fraser River in British Columbia is also included to illustrate the use of these methods.

  10. Emergence of patterns in random processes

    NASA Astrophysics Data System (ADS)

    Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.

    2012-08-01

    Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how to obtain explicitly a universal distribution of the lengths of peak-to-peak sequences in time series, and how this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that the cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest-neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory, which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior, which we attribute to mild persistence. Finally, we examine the behavior of the Standard & Poor's 500 stock index's daily returns from 1928 to 2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. We expect that there will be many other applications of our methodology, both to interoccurrence statistics and to time series.
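
    The headline result is easy to reproduce numerically. A minimal sketch (ours): generate Gaussian white noise, locate local maxima, and check that peak-to-peak sequences contain three events on average, since for an i.i.d. sequence the middle of three values is the largest with probability 1/3.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.standard_normal(1_000_000)            # i.i.d. Gaussian white noise

    # Local maxima: strictly greater than both neighbours (ties have probability zero).
    peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
    gaps = np.diff(peaks)                         # peak-to-peak sequence lengths

    print(f"mean events per peak-to-peak sequence: {gaps.mean():.3f} (theory: 3)")
    vals, counts = np.unique(gaps, return_counts=True)
    print(dict(zip(vals[:5].tolist(), (counts[:5] / gaps.size).round(3).tolist())))
    ```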

  11. Statistics of Land-Grant Colleges and Universities, Year Ended June 30, 1956. Bulletin, 1958, No. 2

    ERIC Educational Resources Information Center

    Holladay, Doris C.

    1958-01-01

    The present publication is the 87th issue of the annual statistical report by the Office of Education on land-grant colleges and universities. The first issue presented data for the academic year 1869-70; this issue carries the series through 1955-56. The series is distinguished, first, by its long history of 87 years, and second, by the scope of…

  12. Analysis and stochastic modelling of Intensity-Duration-Frequency relationship from 88 years of 10 min rainfall data in North Spain

    NASA Astrophysics Data System (ADS)

    Delgado, Oihane; Campo-Bescós, Miguel A.; López, J. Javier

    2017-04-01

    When solving certain hydrological engineering problems, it is often necessary to know rain intensity values associated with a specific probability or return period, T. Based on analyses of extreme rainfall events at different time scales of aggregation, we can deduce the Intensity-Duration-Frequency (IDF) relationships that are widely used in hydraulic infrastructure design. However, the lack of long time series of rainfall intensities at small time scales, minutes or hours, leads to the use of mathematical expressions to characterize and extend these curves. One way to deduce them is through the development of synthetic rainfall time series generated from stochastic models, which is evaluated in this work. From the accumulated rainfall time series recorded every 10 min at the pluviograph of Igueldo (San Sebastian, Spain) for the period 1927-2005, homogeneity has been checked and possible statistically significant increasing or decreasing trends have been examined. Subsequently, two models have been calibrated: the Bartlett-Lewis and Markov chain models, which are based on successions of storms, each composed of a series of rainfall events separated by short intervals of time. Finally, synthetic ten-minute rainfall time series are generated, which allow detailed IDF curves to be estimated and compared with the IDF curves estimated from the recorded data.

  13. A 20-year period of orthotopic liver transplantation activity in a single center: a time series analysis performed using the R Statistical Software.

    PubMed

    Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U

    2009-05-01

    In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures per year at 25 per center. OLT procedures performed in a single center over a reasonably long period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric-donor OLTs in adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs per year were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using the R statistical software (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLTs per month for 2007, calculated by applying Holt-Winters exponential smoothing to the 1987-2006 period, helped to identify the months with the largest differences between predicted and performed procedures. The time series approach may be helpful in establishing a minimum volume per year at the single-center level.
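
    As a rough sketch of the forecasting step, using the Python statsmodels implementation of Holt-Winters rather than R, and synthetic monthly counts standing in for the center's data:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Hypothetical monthly procedure counts, 1987-2006, with mild trend and seasonality.
    rng = np.random.default_rng(8)
    idx = pd.date_range("1987-01", periods=240, freq="MS")
    m = np.arange(240)
    y = pd.Series(1.0 + 0.01 * m + 0.8 * np.sin(2 * np.pi * m / 12)
                  + rng.normal(0, 0.6, 240), index=idx).clip(lower=0)

    fit = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12,
                               initialization_method="estimated").fit()
    pred_2007 = fit.forecast(12)               # predicted procedures per month for 2007
    print(pred_2007.round(1))
    ```

    Comparing `pred_2007` with the counts actually observed month by month is then what flags the months with the largest prediction gaps.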

  14. Mapping quantitative trait loci for binary trait in the F2:3 design.

    PubMed

    Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang

    2008-12-01

    In the analysis of the inheritance of quantitative traits with low heritability, an F(2:3) design, which genotypes plants in F(2) and phenotypes plants in the F(2:3) progeny, is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F(2:3) design are well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F(2:3) design. The fundamental idea is that the F(2) plants are genotyped, all phenotypic values of each F(2:3) progeny are measured for the binary trait, and these binary trait values and the marker genotype information are used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte Carlo simulation experiments. The results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates of the effects and locations of BTL with high statistical power, even under low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F(2:3) design is more efficient than the classical F(2) design, even though only a single progeny is collected from each F(2:3) family. With the maximum likelihood approaches under the penetrance and liability models developed in this study, we can map binary traits as we do quantitative traits in the F(2:3) design.

  15. Diagnosis of digestive functional disease by the statistics of continuous monitoring of esophageal acidity

    NASA Astrophysics Data System (ADS)

    Rivera Landa, Rogelio; Cardenas Cardenas, Eduardo; Fossion, Ruben; Pérez Zepeda, Mario Ulises

    2014-11-01

    Technological advances in the last few decades allow the monitoring of many physiological observables in a continuous way, which in physics is called a "time series". The best-studied physiological time series is that of the heart rhythm, which can be derived from an electrocardiogram (ECG). Studies have shown that a healthy heart is characterized by a complex time series and high heart rate variability (HRV). In adverse conditions, the cardiac time series degenerates towards randomness (as seen in, e.g., fibrillation) or rigidity (as seen in, e.g., ageing), both corresponding to a loss of HRV, as described by, e.g., Goldberger et al. [1]. Cardiac and digestive rhythms are regulated by the autonomic nervous system (ANS), which consists of two antagonistic branches: the orthosympathetic branch (ONS), which accelerates the cardiac rhythm but decelerates the digestive system, and the parasympathetic branch (PNS), which works in the opposite way. For this reason, one might expect that the statistics of gastro-esophageal time series, as described by Gardner et al. [2,3], reflect the health state of the digestive system in a similar way as HRV does in the cardiac case, as described by Minocha et al. In the present project, we apply statistical methods derived from HRV analysis to time series of esophageal acidity (24 h pH-metry). The study is based on data from a large patient population of the Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán. Our focus is on patients with functional disease (symptoms but no anatomical damage). We find that traditional statistical approaches (e.g., Fourier spectral analysis) are unable to distinguish between different degenerations of the digestive system, such as gastroesophageal reflux disease (GERD) or functional gastrointestinal disorder (FGID).

  16. Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.

    2014-04-14

    To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of generated combinations of wind and load parameters and their forecast errors, to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority (BA) locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to capture these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
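
    A minimal single-series sketch of the idea (ours, on synthetic data; the paper's joint multi-BA model and validation are not reproduced): fit a seasonal ARMA with a 24-hour cycle to hourly forecast errors, then simulate a new error series from the fitted model.

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical hourly day-ahead load forecast errors with a 24-hour cycle.
    rng = np.random.default_rng(9)
    n = 24 * 60
    h = np.arange(n)
    errors = 50 * np.sin(2 * np.pi * h / 24) + rng.normal(0, 80, n)  # time-of-day bias + noise

    model = SARIMAX(errors, order=(1, 0, 1), seasonal_order=(1, 0, 0, 24))
    fit = model.fit(disp=False)
    sim = fit.simulate(n)                       # synthetic error series for planning runs
    print(f"observed std {errors.std():.0f} MW, simulated std {sim.std():.0f} MW")
    ```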

  17. Tsallis statistics and neurodegenerative disorders

    NASA Astrophysics Data System (ADS)

    Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.

    2016-08-01

    In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD), and Huntington's disease (HD). The time series comprise electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular, stride intervals) in ALS, PD and HD. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics, and in particular on the estimation of the Tsallis q-triplet, namely {qstat, qsen, qrel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between ALS, PD and HD patients and healthy control subjects. The results indicate that estimates of the Tsallis q-indices could be used as possible biomarkers, along with others, for improving the classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medications and fall risk, and improving therapeutic interventions.

  18. Projections of Education Statistics to 2022. Forty-First Edition. NCES 2014-051

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2014-01-01

    "Projections of Education Statistics to 2022" is the 41st report in a series begun in 1964. It includes statistics on elementary and secondary schools and postsecondary degree-granting institutions. This report provides revisions of projections shown in "Projections of Education Statistics to 2021" and projections of…

  19. Projections of Education Statistics to 2021. Fortieth Edition. NCES 2013-008

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2013-01-01

    "Projections of Education Statistics to 2021" is the 40th report in a series begun in 1964. It includes statistics on elementary and secondary schools and postsecondary degree-granting institutions. This report provides revisions of projections shown in "Projections of Education Statistics to 2020" and projections of…

  20. Structural health monitoring feature design by genetic programming

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Todd, Michael D.

    2014-09-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and other high-capital or life-safety critical structures. Conventional data processing involves pre-processing and extraction of low-dimensional features from in situ time series measurements. The features are then input to a statistical pattern recognition algorithm to perform the relevant classification or regression task necessary to facilitate decisions by the SHM system. Traditional design of signal processing and feature extraction algorithms can be an expensive and time-consuming process requiring extensive system knowledge and domain expertise. Genetic programming, a heuristic program search method from evolutionary computation, was recently adapted by the authors to perform automated, data-driven design of signal processing and feature extraction algorithms for statistical pattern recognition applications. The proposed method, called Autofead, is particularly suitable to handle the challenges inherent in algorithm design for SHM problems where the manifestation of damage in structural response measurements is often unclear or unknown. Autofead mines a training database of response measurements to discover information-rich features specific to the problem at hand. This study provides experimental validation on three SHM applications including ultrasonic damage detection, bearing damage classification for rotating machinery, and vibration-based structural health monitoring. Performance comparisons with common feature choices for each problem area are provided demonstrating the versatility of Autofead to produce significant algorithm improvements on a wide range of problems.

  1. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or a large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample depends on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities of two confidence interval procedures and of procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
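
    The general phenomenon is easy to demonstrate by simulation (our toy one-sample version, not Rothmann's noninferiority setting): if a second sample is collected only when the first look fails to reject, and the pooled data are then tested at the same nominal level, the overall type I error probability exceeds that level.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    alpha, n1, n2, reps = 0.05, 50, 50, 20000
    false_pos = 0
    for _ in range(reps):
        x = rng.standard_normal(n1)              # H0 is true: the mean really is 0
        if stats.ttest_1samp(x, 0).pvalue < alpha:
            false_pos += 1                       # rejection at the first look
        else:
            x2 = np.concatenate([x, rng.standard_normal(n2)])
            if stats.ttest_1samp(x2, 0).pvalue < alpha:
                false_pos += 1                   # second, data-dependent look
    print(f"empirical type I error: {false_pos / reps:.3f} (nominal {alpha})")
    ```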

  2. Clinical judgment research on economic topics: Role of congruence of tasks in clinical practice.

    PubMed

    Huttin, Christine C

    2017-01-01

    This paper discusses how to ensure the performance of judgment studies whose information design integrates the economics of medical systems, in the context of the digitalization of healthcare. It is part of a series of 5 methodological papers on statistical procedures and problems in implementing judgment research designs and decision models, especially for addressing the cost of care and for measuring conversations about the cost of care between physicians and patients, using unstructured data such as economic narratives to complement billing and financial information (e.g., cost cognitive cues in conjoint or reversed conjoint designs). The paper discusses how congruence of tasks can increase the reliability of data. It uses results from two meta-reviews of judgment studies in different fields of application: psychology, business, medical sciences and education. It compares tests for congruence in judgment studies with efficiency tests in econometric studies.

  3. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in the electronic data. In this paper, we consider four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, the observed-cases, confirmed-cases-only, and known-confirmation-rate approaches may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Non-Asbestos Insulation Testing Using a Plasma Torch

    NASA Technical Reports Server (NTRS)

    Morgan, R. E.; Prince, A. S.; Selvidge, S. A.; Phelps, J.; Martin, C. L.; Lawrence, T. W.

    2000-01-01

    Insulation obsolescence issues are a major concern for the Reusable Solid Rocket Motor (RSRM). As old sources of raw materials disappear, new sources must be found and qualified. No simple, inexpensive test presently exists for predicting the erosion performance of a candidate insulation in the full-scale motor. Large motor tests cost millions of dollars and therefore can be used on only a few very select candidates. There is a need for a simple, low-cost method of screening insulation performance that can simulate some of the different erosion environments found in the RSRM. This paper describes a series of erosion tests on two different non-asbestos insulation formulations: a KEVLAR(registered) fiber-filled and a carbon fiber-filled insulation, both containing Ethylene-Propylene-Diene Monomer (EPDM) rubber as the binder. The test instrument was a plasma torch device. The two main variables investigated were heat flux and alumina particle impingement concentration. Statistical analysis revealed that the two formulations had very different responses to the main variables. The results of this work indicate that there may be fundamental differences in how these insulation formulations perform in the motor operating environment. The plasma torch appears to offer a low-cost means of obtaining a fundamental understanding of insulation response to critical factors in a series of statistically designed experiments.

  5. 76 FR 27637 - Supplemental Priorities for Discretionary Grant Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-12

    .... Interrupted time series design means a type of quasi-experimental study (as defined in this notice) in which... design is an adaptation of an interrupted time series design that relies on the comparison of treatment... notice), interrupted time series designs (as defined in this notice), or regression discontinuity designs...

  6. 75 FR 47284 - Secretary's Priorities for Discretionary Grant Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ... the most currently available data. Interrupted time series design \\4\\ means a type of quasi... findings. \\4\\ A single subject or single case design is an adaptation of an interrupted time series design...), interrupted time series designs (as defined in this notice), or regression discontinuity designs (as defined...

  7. Statistical properties of fluctuations of time series representing appearances of words in nationwide blog data and their applications: An example of modeling fluctuation scalings of nonstationary time series.

    PubMed

    Watanabe, Hayafumi; Sano, Yukie; Takayasu, Hideki; Takayasu, Misako

    2016-11-01

    To elucidate the nontrivial empirical statistical properties of fluctuations of a typical nonsteady time series representing the appearance of words in blogs, we investigated approximately 3×10^9 Japanese blog articles over a period of six years and analyzed corresponding mathematical models. First, we introduce a solvable nonsteady extension of the random diffusion model, which can be deduced by modeling the behavior of heterogeneous random bloggers. Next, we deduce theoretical expressions for both the temporal and ensemble fluctuation scalings of this model, and demonstrate that these expressions can reproduce all empirical scalings over eight orders of magnitude. Furthermore, we show that the model can reproduce other statistical properties of time series representing the appearance of words in blogs, such as the functional forms of the probability density and correlations in the total number of blogs. As an application, we quantify the abnormality of special nationwide events by measuring the fluctuation scalings of 1771 basic adjectives.
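
    Ensemble fluctuation scaling (Taylor's law) is simple to measure on synthetic count data. A toy sketch (ours, not the paper's random diffusion model): Poisson word counts with an extra multiplicative fluctuation, whose fitted scaling exponent lies between the pure-Poisson value 0.5 and the system-wide-fluctuation value 1.0.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_words, n_days = 500, 2000
    mean_rates = 10 ** rng.uniform(-1, 3, n_words)     # heterogeneous word popularity
    lam = mean_rates[:, None] * rng.lognormal(0, 0.3, (n_words, n_days))
    counts = rng.poisson(lam)                          # Poisson core + extra fluctuations

    mu = counts.mean(axis=1)
    sigma = counts.std(axis=1)
    alpha, logc = np.polyfit(np.log10(mu), np.log10(sigma), 1)   # sigma ~ mu^alpha
    print(f"fluctuation scaling exponent alpha = {alpha:.2f} "
          "(0.5 = pure Poisson, 1.0 = system-wide fluctuations)")
    ```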

  8. Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes

    NASA Astrophysics Data System (ADS)

    Białek, Małgorzata

    2006-03-01

    Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: the seasonal mean, standard deviation, maximum and minimum of total daily ozone values, for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of the negative trends in the seasonal means compared with those calculated by the WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data using the Kolmogorov-Smirnov test and by comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A distribution shift toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.

  9. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation for the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, the normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limit, and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, providing an excellent fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture or industrial use.
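
    The platykurtic/leptokurtic labels above follow from the sign of the excess kurtosis, which is simple to compute. A sketch on synthetic data (the parameter values are ours, not the Yamuna measurements):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    # Hypothetical monthly series of two water-quality parameters.
    ph = rng.normal(7.6, 0.3, 120)
    bod = rng.lognormal(1.2, 0.6, 120)           # biochemical oxygen demand, mg/L

    for name, x in [("pH", ph), ("BOD", bod)]:
        k = stats.kurtosis(x)                    # excess kurtosis (normal = 0)
        shape = "platykurtic" if k < 0 else "leptokurtic"
        print(f"{name}: mean {x.mean():.2f}, sd {x.std():.2f}, "
              f"skew {stats.skew(x):.2f}, kurtosis {k:.2f} -> {shape}")
    ```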

  10. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014

  11. [Gypsy moth Lymantria dispar L. in the South Urals: Patterns in population dynamics and modelling].

    PubMed

    Soukhovolsky, V G; Ponomarev, V I; Sokolov, G I; Tarasova, O V; Krasnoperova, P A

    2015-01-01

    The population dynamics of the gypsy moth in different habitats of the South Urals are analysed. The pattern of cyclic changes in population density is examined, the temporal conjugation of time series of gypsy moth population dynamics from separate habitats of the South Urals is assessed, and the relationships between population density and weather conditions are studied. Based on the results obtained, a statistical model of gypsy moth population dynamics in the South Urals is designed, and the effects of regulatory and modifying factors on the population dynamics are estimated.

  12. 77 FR 64029 - Special Conditions: Airbus Model A318, A319, A320, and A321 Series Airplanes; Design Dive Speed

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-18

    ... Series Airplanes; Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final... Model A318, A319, A320, and A321 series airplanes because of a novel or unusual design feature, special... Unusual Design Features The Airbus Model A318, A319, and A320 series airplanes with modification 160500...

  13. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all of the machine learning methods employed, especially in the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also in the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have higher generalizability than models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to making high-quality predictions on various data sets and in different compound optimization scenarios.

  14. Illustrating the practice of statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative, considering a series of models and evaluating their fit. Designing a data collection scheme involves understanding the scientist's goal and staying within his or her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  15. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but they are time-, cost- and labor-intensive, and sometimes impossible to conduct due to the increasing number of factors and factor levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that uses only a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce, and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h⁻¹) was conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady-state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
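
    To make the design concrete, the sketch below builds the kind of 25-run, strength-2 orthogonal array a Taguchi design would use for three factors at five levels and selects those runs from the 125-run full factorial. The construction (i, j, (i+j) mod 5) is a standard Latin-square device, not necessarily the specific array used in the study.

```python
# A minimal sketch of extracting a Taguchi-style subset from a full
# factorial, assuming three factors at five levels each. In the 25-run
# design below, every pair of factor levels appears exactly once.
import itertools

lengths     = [0.4, 0.8, 1.2, 1.6, 2.0]        # slope length (m)
gradients   = [18, 27, 36, 48, 58]             # slope gradient (%)
intensities = [48, 62.4, 102, 149, 170]        # rainfall intensity (mm/h)

full = list(itertools.product(range(5), repeat=3))               # 125 runs
oa = [(i, j, (i + j) % 5) for i in range(5) for j in range(5)]   # 25 runs

assert set(oa) <= set(full)                    # the OA is a subset of the factorial
for i, j, k in oa:
    print(f"L={lengths[i]:.1f} m  S={gradients[j]}%  I={intensities[k]} mm/h")
```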

  16. Parameters Design of Series Resonant Inverter Circuit

    NASA Astrophysics Data System (ADS)

    Qi, Xingkun; Peng, Yonglong; Li, Yabin

    This paper analyzes the main circuit structure of the series resonant inverter and designs the component parameters of the main circuit, providing a theoretical method for the design of series resonant inverters.

  17. An approach to constructing a homogeneous time series of soil moisture using SMOS

    USDA-ARS?s Scientific Manuscript database

    Overlapping soil moisture time series derived from two satellite microwave radiometers (SMOS, Soil Moisture and Ocean Salinity; AMSR-E, Advanced Microwave Scanning Radiometer - Earth Observing System) are used to generate a soil moisture time series from 2003 to 2010. Two statistical methodologies f...

  18. Regional-scale analysis of extreme precipitation from short and fragmented records

    NASA Astrophysics Data System (ADS)

    Libertino, Andrea; Allamano, Paola; Laio, Francesco; Claps, Pierluigi

    2018-02-01

    The rain gauge is the oldest and most accurate instrument for rainfall measurement, able to provide long series of reliable data. However, rain gauge records are often plagued by gaps, spatio-temporal discontinuities and inhomogeneities that can affect their suitability for a statistical assessment of the characteristics of extreme rainfall. Furthermore, the need to discard the shorter series to obtain robust estimates leads to ignoring a significant amount of information that can be essential, especially when estimates for large return periods are sought. This work describes a robust statistical framework for dealing with uneven and fragmented rainfall records on a regional spatial domain. The proposed technique, named "patched kriging", allows one to exploit all the information available from the recorded series, independently of their length, to provide extreme rainfall estimates in ungauged areas. The methodology involves the sequential application of the ordinary kriging equations, producing a homogeneous dataset of synthetic series with uniform lengths. In this way, the errors inherent to any regional statistical estimation can be easily represented in the spatial domain and, possibly, corrected. Furthermore, the homogeneity of the obtained series provides robustness toward local artefacts during the parameter-estimation phase. The application to a case study in north-western Italy demonstrates the potential of the methodology and provides a significant base for discussing its advantages over previous techniques.
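
    The sketch below illustrates a single ordinary kriging solve of the kind applied sequentially in such a scheme. Gauge locations, annual maxima and variogram parameters are hypothetical; a real application would fit the variogram to the data.

```python
# A minimal sketch of ordinary kriging at one ungauged location, assuming
# an exponential variogram with illustrative parameters.
import numpy as np

def variogram(h, sill=1.0, rng_km=50.0, nugget=0.05):
    """Exponential variogram model."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_km))

# Hypothetical gauge coordinates (km) and one year of annual maxima (mm).
xy = np.array([[0.0, 0.0], [10.0, 5.0], [25.0, 12.0], [8.0, 30.0]])
obs = np.array([42.0, 55.0, 61.0, 48.0])
target = np.array([12.0, 10.0])

n = len(obs)
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

# Ordinary kriging system with a Lagrange multiplier for unbiasedness.
A = np.ones((n + 1, n + 1))
A[:n, :n] = variogram(d)
np.fill_diagonal(A[:n, :n], 0.0)   # gamma(0) = 0 by definition
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = variogram(np.linalg.norm(xy - target, axis=1))

w = np.linalg.solve(A, b)          # last entry is the Lagrange multiplier
estimate = w[:n] @ obs
variance = w @ b                   # kriging variance
print(f"kriged estimate: {estimate:.1f} mm, kriging variance: {variance:.3f}")
```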

  19. A primer on the study of transitory dynamics in ecological series using the scale-dependent correlation analysis.

    PubMed

    Rodríguez-Arias, Miquel Angel; Rodó, Xavier

    2004-03-01

    Here we describe a practical, step-by-step primer to scale-dependent correlation (SDC) analysis. The analysis of transitory processes is an important but often neglected topic in ecological studies because only a few statistical techniques appear to detect temporary features accurately enough. We introduce here the SDC analysis, a statistical and graphical method to study transitory processes at any temporal or spatial scale. SDC analysis, thanks to the combination of conventional procedures and simple well-known statistical techniques, becomes an improved time-domain analogue of wavelet analysis. We use several simple synthetic series to describe the method, a more complex example, full of transitory features, to compare SDC and wavelet analysis, and finally we analyze some selected ecological series to illustrate the methodology. The SDC analysis of time series of copepod abundances in the North Sea indicates that ENSO is the primary climatic driver of short-term changes in population dynamics. SDC also uncovers some long-term, unexpected features in the population. Similarly, the SDC analysis of Nicholson's blowflies data locates where the proposed models fail and provides new insights about the mechanism that drives the apparent vanishing of the population cycle during the second half of the series.
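
    A minimal sketch of the core SDC computation follows: Pearson correlations between all pairs of windows of a fixed size drawn from two series. The series, window size and step are illustrative; a full SDC analysis adds significance thresholds and the two-dimensional diagram of correlation over window positions.

```python
# A minimal sketch of scale-dependent correlation at one scale: correlate
# every window of x against every window of y and locate the strongest pair.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(300)
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(t.size)
y = np.roll(x, 10) + 0.3 * rng.standard_normal(t.size)   # lagged copy of x

s = 25                                   # window size = the "scale"
starts = range(0, t.size - s + 1, 5)
sdc = np.array([[np.corrcoef(x[i:i + s], y[j:j + s])[0, 1]
                 for j in starts] for i in starts])

i, j = np.unravel_index(np.abs(sdc).argmax(), sdc.shape)
print(f"strongest window pair: x@{list(starts)[i]}, y@{list(starts)[j]}, "
      f"r={sdc[i, j]:+.2f}")
```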

  20. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

    Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that are frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. For quantifying scale invariant parameters of physiological signals, several methods have been proposed. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, more recently, in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride-interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility for the method to falsely detect long-range dependence in artificially generated short-range-dependent series was investigated.
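
    Of the estimators listed above, detrended fluctuation analysis is perhaps the most common; the sketch below implements a basic DFA and recovers the expected exponent of about 0.5 for uncorrelated noise. It is a generic illustration, not the embedding procedure proposed in the paper.

```python
# A minimal sketch of detrended fluctuation analysis (DFA).
import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n = y.size // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                      # detrend each segment
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    # Scaling exponent alpha from log-log regression of F(s) against s.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
print(f"white noise alpha ~ {dfa_exponent(white, [16, 32, 64, 128, 256]):.2f}")
# Expected: ~0.5 for uncorrelated noise; >0.5 indicates long memory.
```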

  1. 77 FR 18229 - Applications for New Awards; Investing in Innovation Fund, Validation Grants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... the appearance of a conflict of interest. Interrupted time series design \\8\\ means a type of quasi... single case design is an adaptation of an interrupted time series design that relies on the comparison of... notice), interrupted time series designs (as defined in this notice), or regression discontinuity designs...

  2. 77 FR 18216 - Applications for New Awards; Investing in Innovation Fund, Scale-Up Grants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... evaluation and prevents even the appearance of a conflict of interest. Interrupted time series design \\8... findings. \\8\\ A single subject or single case design is an adaptation of an interrupted time series design... matched comparison group designs (as defined in this notice), interrupted time series designs (as defined...

  3. 76 FR 32171 - Applications for New Awards; Investing in Innovation Fund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ... conflict of interest. Interrupted time series design \\5\\ means a type of quasi- experimental study in... single case design is an adaptation of an interrupted time series design that relies on the comparison of...), interrupted time series designs (as defined in this notice), or regression discontinuity designs (as defined...

  4. 76 FR 32159 - Applications for New Awards; Investing in Innovation Fund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ... conflict of interest. Interrupted time series design \\5\\ means a type of quasi- experimental study in which... design is an adaptation of an interrupted time series design that relies on the comparison of treatment...), interrupted time series designs (as defined in this notice), or regression discontinuity designs (as defined...

  5. Learning and Celebrating: The Glamour of Design Lecture Series

    ERIC Educational Resources Information Center

    Popov, Lubomir

    2013-01-01

    The purpose of this paper is to highlight the celebratory aspect of the Design Lecture Series, a tradition in architecture schools and interior design programs, its meaning for all constituent parties, and its contributions to creating professional identity and community. The Design Lecture Series is a public event popular in design programs,…

  6. Launch Vehicle Propulsion Design with Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.

    2005-01-01

    The approach and techniques described herein define an optimization and evaluation method for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show impacts on engine mass and overall vehicle mass.

  7. Visualization of time series statistical data by shape analysis (GDP ratio changes among Asia countries)

    NASA Astrophysics Data System (ADS)

    Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri

    2018-03-01

    Visualizing big time-series data has become very important. In this paper we discuss a new analysis method called “statistical shape analysis” or “geometry driven statistics” applied to time series statistical data in economics. We analyse changes in agriculture value added and industry value added (as a percentage of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image to see the deformation using the principal components. The point of the analysis method is the principal components of the given formation, which are eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-affine transformations. The transformations give us information about the local differences between 2000 and 2010. Because a non-affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology but, in economics, no application can be found. In this paper, we investigate its potential to analyse economic data.
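
    The sketch below shows the landmark-alignment step that statistical shape analysis builds on: Procrustes superimposition of two hypothetical landmark configurations, after which the residuals carry the local (non-affine) deformation. The bending-energy decomposition into partial warps used in the paper is beyond this sketch.

```python
# A minimal sketch of Procrustes superimposition of two landmark sets,
# e.g. hypothetical country positions in an economic "shape" for 2000
# versus 2010.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(2)
shape_2000 = rng.random((8, 2))                    # 8 landmarks in 2D
shape_2010 = shape_2000 * 1.2 + 0.05 * rng.standard_normal((8, 2))

m2000, m2010, disparity = procrustes(shape_2000, shape_2010)
print(f"residual disparity after removing translation/scale/rotation: "
      f"{disparity:.4f}")
print("per-landmark deformation vectors:\n", m2010 - m2000)
```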

  8. Projections of Education Statistics to 2024. Forty-Third Edition. NCES 2016-013

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2016-01-01

    "Projections of Education Statistics to 2024" is the 43rd report in a series begun in 1964. It includes statistics on elementary and secondary schools and degree-granting postsecondary institutions. This report provides revisions of projections shown in Projections of Education Statistics to 2023 and projections of enrollment, graduates,…

  9. Projections of Education Statistics to 2019. Thirty-Eighth Edition. NCES 2011-017

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2011-01-01

    "Projections of Education Statistics to 2019" is the 38th report in a series begun in 1964. It includes statistics on elementary and secondary schools and degree-granting institutions. This report provides revisions of projections shown in "Projections of Education Statistics to 2018." Included are projections of enrollment,…

  10. Projections of Education Statistics to 2020. Thirty-Ninth Edition. NCES 2011-026

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2011-01-01

    "Projections of Education Statistics to 2020" is the 39th report in a series begun in 1964. It includes statistics on elementary and secondary schools and postsecondary degree-granting institutions. This report provides revisions of projections shown in "Projections of Education Statistics to 2019". Included are projections of…

  11. Projections of Education Statistics to 2025. Forty-Fourth Edition. NCES 2017-019

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2017-01-01

    "Projections of Education Statistics to 2025" is the 44th report in a series begun in 1964. It includes statistics on elementary and secondary schools and degree-granting postsecondary institutions. This report provides revisions of projections shown in Projections of Education Statistics to 2024 and projections of enrollment, graduates,…

  12. Projections of Education Statistics to 2023. Forty-Second Edition. NCES 2015-073

    ERIC Educational Resources Information Center

    Hussar, William J.; Bailey, Tabitha M.

    2016-01-01

    "Projections of Education Statistics to 2023" is the 42nd report in a series begun in 1964. It includes statistics on elementary and secondary schools and postsecondary degree-granting institutions. This report provides revisions of projections shown in Projections of Education Statistics to 2022 and projections of enrollment, graduates,…

  13. Characterization of time series via Rényi complexity-entropy curves

    NASA Astrophysics Data System (ADS)

    Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.

    2018-05-01

    One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
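
    The sketch below computes one ingredient of such curves: Bandt-Pompe ordinal-pattern probabilities and the normalized Rényi entropy swept over its parameter. Pairing each entropy value with the generalized statistical complexity, which this sketch omits, traces the full complexity-entropy curve.

```python
# A minimal sketch of ordinal-pattern probabilities plus the Renyi entropy
# swept over alpha; embedding dimension d=4 is an illustrative choice.
import math
from collections import Counter
import numpy as np

def ordinal_probs(x, d=4):
    """Probabilities of ordinal patterns of embedding dimension d."""
    pats = Counter(tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1))
    total = sum(pats.values())
    return np.array([c / total for c in pats.values()])

def renyi_entropy(p, alpha):
    if math.isclose(alpha, 1.0):                   # Shannon limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)                      # stochastic example series
p = ordinal_probs(x)
hmax = math.log(math.factorial(4))                 # uniform over d! patterns
for alpha in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(f"alpha={alpha:>3}: H/Hmax = {renyi_entropy(p, alpha) / hmax:.3f}")
```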

  14. Validation of non-stationary precipitation series for site-specific impact assessment: comparison of two statistical downscaling techniques

    NASA Astrophysics Data System (ADS)

    Mullan, Donal; Chen, Jie; Zhang, Xunchang John

    2016-02-01

    Statistical downscaling (SD) methods have become a popular, low-cost and accessible means of bridging the gap between the coarse spatial resolution at which climate models output climate scenarios and the finer spatial scale at which impact modellers require these scenarios, with various SD techniques used for a wide range of applications across the world. This paper compares the Generator for Point Climate Change (GPCC) model and the Statistical DownScaling Model (SDSM), two contrasting SD methods, in terms of their ability to generate precipitation series under non-stationary conditions across ten contrasting global climates. The mean, maximum and a selection of distribution statistics, as well as the cumulative frequencies of dry and wet spells for four different temporal resolutions, were compared between the models and the observed series for a validation period. Results indicate that both methods can generate daily precipitation series that generally closely mirror observed series for a wide range of non-stationary climates. However, GPCC tends to overestimate higher precipitation amounts, whilst SDSM tends to underestimate these. This implies that GPCC is more likely to overestimate the effects of precipitation on a given impact sector, whilst SDSM is likely to underestimate the effects. GPCC performs better than SDSM in reproducing wet and dry day frequency, which is a key advantage for many impact sectors. Overall, the mixed performance of the two methods illustrates the importance of users performing a thorough validation in order to determine the influence of simulated precipitation on their chosen impact sector.

  15. The Cross-Correlation and Reshuffling Tests in Discerning Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Schultz, Ryan; Telesca, Luciano

    2018-05-01

    In recent years, cases of newly emergent induced clusters have increased seismic hazard and risk in locations with social, environmental, and economic consequence. Thus, the need for a quantitative and robust means to discern induced seismicity has become a critical concern. This paper reviews a Matlab-based algorithm designed to quantify the statistical confidence of the correlation between two time-series datasets. Similar to prior approaches, our method utilizes the cross-correlation to delineate the strength and lag of correlated signals. In addition, use of surrogate reshuffling tests allows for dynamic testing against statistical confidence intervals of anticipated spurious correlations. We demonstrate the robust nature of our algorithm in a suite of synthetic tests to determine the limits of accurate signal detection in the presence of noise and sub-sampling. Overall, this routine has considerable merit for delineating the strength of correlated signals, with applications that include discerning induced seismicity from natural seismicity.
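
    A minimal sketch of such a reshuffling test follows, with simulated injection and seismicity series: the observed peak cross-correlation is compared against surrogates obtained by random permutation. The data and lag are hypothetical, and the routine is a generic analogue, not the Matlab algorithm itself.

```python
# A minimal sketch of a cross-correlation plus reshuffling test: permuted
# surrogates give the null distribution of spurious correlation peaks.
import numpy as np

def peak_xcorr(a, b, max_lag=12):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = range(-max_lag, max_lag + 1)
    cc = [np.mean(a[max(0, -k):len(a) - max(0, k)] *
                  b[max(0, k):len(b) - max(0, -k)]) for k in lags]
    i = int(np.argmax(np.abs(cc)))
    return cc[i], list(lags)[i]

rng = np.random.default_rng(4)
inj = rng.poisson(5, 200).astype(float)            # e.g. injection volumes
quakes = np.roll(inj, 3) + rng.normal(0, 2, 200)   # lagged, noisy response

obs, lag = peak_xcorr(inj, quakes)
null = [abs(peak_xcorr(rng.permutation(inj), quakes)[0]) for _ in range(999)]
p = (1 + sum(n >= abs(obs) for n in null)) / 1000
print(f"peak r={obs:+.2f} at lag {lag}; reshuffling p-value ~ {p:.3f}")
```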

  16. Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow

    NASA Astrophysics Data System (ADS)

    Ulerich, Rhys; Moser, Robert D.

    2012-11-01

    To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls has been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on bulk velocity and sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high-quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  17. Learning investment indicators through data extension

    NASA Astrophysics Data System (ADS)

    Dvořák, Marek

    2017-07-01

    Stock prices in the form of time series were analysed using univariate and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series to a multivariate representation. This method makes use of sliding windows to calculate several dozen new variables using simple statistics such as the first and second moments, as well as more complicated statistics such as autoregression coefficients and residual analysis, followed by an optional quadratic transformation that was further used for data extension. These were used as explanatory variables in a regularized logistic LASSO regression that tried to estimate the Buy-Sell Index (BSI) from real stock market data.
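
    The sketch below illustrates the feature-extension idea under simplifying assumptions: simulated log returns, a handful of sliding-window statistics (a small subset of the several dozen described), and an L1-regularized logistic regression against a hypothetical next-day-direction stand-in for the BSI.

```python
# A minimal sketch of sliding-window feature extension followed by an
# L1-regularized (LASSO-style) logistic regression. Data and the binary
# label are simulated, not a real BSI.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
price = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1200)))
r = np.diff(np.log(price))                        # log returns

w = 20                                            # sliding-window length
rows, labels = [], []
for t in range(w, len(r) - 1):
    win = r[t - w:t]
    ar1 = np.corrcoef(win[:-1], win[1:])[0, 1]    # lag-1 autocorrelation
    rows.append([win.mean(), win.std(), win.min(), win.max(), ar1])
    labels.append(int(r[t + 1] > 0))              # hypothetical BSI proxy

X, y = np.array(rows), np.array(labels)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X[:800], y[:800])
print("selected coefficients:", np.round(clf.coef_, 3))
print("out-of-sample accuracy:", clf.score(X[800:], y[800:]).round(3))
```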

  18. 77 FR 11087 - Applications for New Awards; Investing in Innovation Fund, Development Grants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ... prevents even the appearance of a conflict of interest. Interrupted time series design \\8\\ means a type of.... \\8\\ A single subject or single case design is an adaptation of an interrupted time series design that... matched comparison group designs (as defined in this notice), interrupted time series designs (as defined...

  19. 75 FR 12072 - Office of Innovation and Improvement; Overview Information: Investing in Innovation Fund; Notice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ... matched comparison group designs (as defined in this notice), interrupted time series designs (as defined... reading skills administered in the same way to both groups). Interrupted time series design \\3\\ means a... findings. \\3\\ A single subject or single case design is an adaptation of an interrupted time series design...

  20. Factors that Affect Operational Reliability of Turbojet Engines

    NASA Technical Reports Server (NTRS)

    1956-01-01

    The problem of improving operational reliability of turbojet engines is studied in a series of papers. Failure statistics for this engine are presented, the theory and experimental evidence on how engine failures occur are described, and the methods available for avoiding failure in operation are discussed. The individual papers of the series are Objectives, Failure Statistics, Foreign-Object Damage, Compressor Blades, Combustor Assembly, Nozzle Diaphragms, Turbine Buckets, Turbine Disks, Rolling Contact Bearings, Engine Fuel Controls, and Summary Discussion.

  1. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
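
    A minimal sketch of segmented regression on simulated monthly data follows: indicator and elapsed-time regressors let the level and slope change at the intervention, and OLS tests those changes. Coefficient values are illustrative, not those of the ambulance study.

```python
# A minimal sketch of segmented regression for an interrupted time series:
# level and slope are allowed to change at the intervention point.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, t0 = 48, 24                                   # months; intervention at t0
t = np.arange(n)
post = (t >= t0).astype(float)                   # level-change indicator
t_since = np.where(t >= t0, t - t0, 0)           # slope-change regressor

# Simulated truth: slope +0.5/month, level drop -4, slope change -0.3.
y = 50 + 0.5 * t - 4 * post - 0.3 * t_since + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([t, post, t_since]))
fit = sm.OLS(y, X).fit()
print(fit.summary(xname=["const", "time", "level_change", "slope_change"]))
```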

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.

    The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation procedures, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatio-temporal patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.

  3. Causal Inference and the Comparative Interrupted Time Series Design: Findings from Within-Study Comparisons

    ERIC Educational Resources Information Center

    St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.

    2014-01-01

    Researchers are increasingly using comparative interrupted time series (CITS) designs to estimate the effects of programs and policies when randomized controlled trials are not feasible. In a simple interrupted time series design, researchers compare the pre-treatment values of a treatment group time series to post-treatment values in order to…

  4. 78 FR 36084 - Special Conditions: The Boeing Company, Model 717-200 Series Airplanes; Seats With Inflatable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-17

    ... 717-200 series airplanes. These airplanes will have a novel or unusual design feature... series airplanes will incorporate the following novel or unusual design features: inflatable lapbelts on... certain novel or unusual design features on one model series of airplanes. It is not a rule of general...

  5. 78 FR 68775 - Special Conditions: Airbus, Model A350-900 Series Airplane; Composite Fuselage In-Flight Fire...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... Model A350-900 series airplanes. This airplane will have a novel or unusual design features associated... Model A350-900 series because of a novel or unusual design feature, special conditions are prescribed... Airbus Model A350-900 series airplane will incorporate the following novel or unusual design features...

  6. 78 FR 76980 - Special Conditions: Airbus, A350-900 Series Airplane; Interaction of Systems and Structures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... series airplanes. These airplanes will have novel or unusual design features when compared to the state...-900 series because of a novel or unusual design feature, special conditions are prescribed under Sec...)(2). Novel or Unusual Design Features The Airbus Model A350-900 series will incorporate the following...

  7. A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series

    NASA Astrophysics Data System (ADS)

    Rovira, F.; Palau, J. L.; Millán, M.

    2009-09-01

    Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (e.g., global radiation, air temperature above the surface). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. This methodology combines both the continuous wavelet transform and the development of a parametric model that includes the time evolution of the principal and the most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by using the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. Once we have assured the level of significance of the different waves composing the parametric model, we can obtain the statistically significant principal harmonics (in time) of the original time series by using the Fourier transform of the modelled signal. Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE, No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).

  8. Graphic analysis and multifractal on percolation-based return interval series

    NASA Astrophysics Data System (ADS)

    Pei, A. Q.; Wang, J.

    2015-05-01

    A financial time series model is developed and investigated by means of an oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of return intervals of the model for different parameter settings, and also comparatively study these fluctuation patterns with those of the real financial data for different threshold values. The empirical research of this work exhibits the multifractal features of the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world, hierarchical, high-clustering and power-law-tail behaviors in the degree distributions.
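
    The sketch below constructs the natural visibility graph used in such analyses: points (i, x_i) are linked when the straight line between them passes above all intermediate points, and node degrees then summarize the series. Parameters and data are illustrative.

```python
# A minimal sketch of the natural visibility graph (VG) construction.
import numpy as np

def visibility_edges(x):
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            # Visibility criterion: every x_k lies below the chord i -> j.
            if j == i + 1 or np.all(
                    x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(7)
series = rng.standard_normal(200)
edges = visibility_edges(series)
deg = np.bincount(np.array(edges).ravel())
print(f"{len(edges)} edges; mean degree {deg.mean():.2f}, max {deg.max()}")
```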

  9. Volatility behavior of visibility graph EMD financial time series from Ising interacting system

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Wang, Jun; Fang, Wen

    2015-08-01

    A financial market dynamics model is developed and investigated using a stochastic Ising system, the Ising model being the most popular ferromagnetic model in statistical physics. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of the return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method, and real stock market indices are comparatively studied against the simulation data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulated series has power-law tails, and the assortative network exhibits the mixing-pattern property. All these features are in agreement with the real market data, so the research confirms that the financial model established on the Ising system is reasonable.

  10. HYPE: a WFD tool for the identification of significant and sustained upward trends in groundwater time series

    NASA Astrophysics Data System (ADS)

    Lopez, Benjamin; Croiset, Nolwenn; Gourcy, Laurence

    2014-05-01

    The Groundwater Directive 2006/118/EC on the protection of groundwater against pollution and deterioration, a daughter directive of the Water Framework Directive (WFD), asks Member States to identify significant and sustained upward trends in all bodies or groups of bodies of groundwater that are characterised as being at risk in accordance with Annex II to Directive 2000/60/EC. The Directive indicates that the procedure for the identification of significant and sustained upward trends must be based on a statistical method. Moreover, for significant increases of concentrations of pollutants, trend reversals must be identified, which requires the ability to detect significant trend reversals. A specific tool, named HYPE, has been developed to help stakeholders working on groundwater trend assessment. The R-encoded tool HYPE provides statistical analysis of groundwater time series. It follows several studies on the relevance of the use of statistical tests on groundwater data series (Lopez et al., 2011) and other case studies on the topic (Bourgine et al., 2012). It integrates the most powerful and robust statistical tests for hydrogeological applications. HYPE is linked to the French national database on groundwater data (ADES), so monitoring data gathered by the Water Agencies can be directly processed. HYPE has two main modules: a characterisation module, which allows users to visualize time series, calculates the main statistical characteristics and provides graphical representations; and a trend module, which identifies significant breaks, trends and trend reversals in time series, providing result tables and graphical representations. Additional modules are also implemented to identify regional and seasonal trends and to sample time series in a relevant way. HYPE was used successfully in 2012 by the French Water Agencies to satisfy requirements of the WFD concerning the characterization of groundwater bodies' qualitative status and the evaluation of the risk of non-achievement of good status. Bourgine B. et al. 2012, Ninth International Geostatistics Congress, Oslo, Norway, June 11-15. Lopez B. et al. 2011, Final Report BRGM/RP-59515-FR, 166 p.
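
    The specific tests bundled in HYPE are not detailed above; as a generic illustration of the kind of nonparametric trend test such tools rely on, the sketch below applies a Mann-Kendall test (no-ties variance) to a simulated rising concentration series.

```python
# A minimal sketch of the Mann-Kendall trend test, a standard choice for
# upward-trend detection in water quality series; data are simulated.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18          # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    return z, 2 * (1 - norm.cdf(abs(z)))

rng = np.random.default_rng(8)
nitrate = 20 + 0.4 * np.arange(40) + rng.normal(0, 2, 40)   # mg/L, rising
z, p = mann_kendall(nitrate)
print(f"Mann-Kendall z={z:+.2f}, two-sided p={p:.4f}")
```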

  11. Reporting clinical outcomes of breast reconstruction: a systematic review.

    PubMed

    Potter, S; Brigic, A; Whiting, P F; Cawthorn, S J; Avery, K N L; Donovan, J L; Blazeby, J M

    2011-01-05

    Breast reconstruction after mastectomy for cancer requires accurate evaluation to inform evidence-based participatory decision making, but the standards of outcome reporting after breast reconstruction have not previously been considered. We used extensive searches to identify articles reporting surgical outcomes of breast reconstruction. We extracted data using published criteria for complication reporting modified to reflect reconstructive practice. Study designs included randomized controlled trials, cohort studies, and case series. The Cochrane Risk of Bias tool was used to critically appraise all study designs. Other criteria used to assess the studies were selection and funding bias, statistical power calculations, and institutional review board approval. Wilcoxon signed rank tests were used to compare the breadth and frequency of study outcomes, and χ² tests were used to compare the number of studies in each group reporting each of the published criteria. All statistical tests were two-sided. Surgical complications following breast reconstruction in 42,146 women were evaluated in 134 studies. These included 11 (8.2%) randomized trials, 74 (55.2%) cohort studies, and 49 (36.6%) case series. Fifty-three percent of studies demonstrated a disparity between methods and results in the numbers of complications reported. Complications were defined by 87 (64.9%) studies and graded by 78 (58.2%). Details such as the duration of follow-up and risk factors for adverse outcomes were omitted from 47 (35.1%) and 58 (43.3%) studies, respectively. Overall, the studies defined fewer than 20% of the complications they reported, and the definitions were largely inconsistent. The results of this systematic review suggest that outcome reporting in breast reconstruction is inconsistent and lacks methodological rigor. The development of a standardized core outcome set is recommended to improve outcome reporting in breast reconstruction.

  12. Schools and Staffing Survey (SASS): 1995. Selected Papers Presented at the Meeting of the American Statistical Association (Orlando, Florida, August 13-17, 1996). Working Paper Series.

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The papers were presented at the Social Statistics Section, the Government Statistics Section, and the Section on Survey Research Methods. The following papers are included in the Social Statistics Section and Government Statistics Section, "Overcoming the Bureaucratic Paradigm: Memorial Session in Honor of Roger Herriot": "1995…

  13. The Consequences of Model Misidentification in the Interrupted Time-Series Experiment.

    ERIC Educational Resources Information Center

    Padia, William L.

    Campbell (1969) argued for the interrupted time-series experiment as a useful methodology for testing intervention effects in the social sciences. The validity of the statistical hypothesis testing of time series is, however, dependent upon the proper identification of the underlying stochastic nature of the data. Several types of model…

  14. The midline central artery forehead flap: a valid alternative to supratrochlear-based forehead flaps.

    PubMed

    Faris, Callum; van der Eerden, Paul; Vuyk, Hade

    2015-01-01

    This study clarifies the pedicle geometry and vascular supply of a midline forehead flap for nasal reconstruction. It reports on the vascular reliability of this flap and its ability to reduce hair transposition to the nose, a major complicating factor of previous forehead flap designs. To compare the vascular reliability of 3 different pedicle designs of the forehead flap in nasal reconstruction (classic paramedian, glabellar paramedian, and central artery flap design) and evaluate hair transposition rates and aesthetic results. Retrospective analysis of patient data and outcomes retrieved from computer files generated at the time of surgery, supplemented by data from the patient medical records and photographic documentation, from a tertiary referral nasal reconstructive practice, within a secondary-care hospital setting. The study population included all consecutive patients over a 19-year period who underwent primary forehead flap repair of nasal defects, with more than 3 months of postoperative follow-up and photographic documentation. Three sequential forehead flap patterns were used (classic paramedian flap, glabellar paramedian flap, and central artery flap) for nasal reconstruction over the study duration. Data collected included patient characteristics, method of repair, complications, functional outcome, and patient satisfaction score. For cosmetic outcome, photographic documentation was scored by a medical juror. No forehead flap had vascular compromise in the first stage. Partial flap necrosis was reported in subsequent stages in 4 patients (1%), with no statistical difference in the rate of vascular compromise between the 3 flap designs. Hair transposition to the nose was lower with the central artery forehead flap (7%) compared with the classic paramedian (23%) and glabellar paramedian (13%) flaps (P < .05). Photographic evaluation in 227 patients showed that brow position (98%) and color match (83%) were good in the majority of the patients. In this series, the central artery forehead flap was as reliable (in terms of vascularity) as the glabellar and classic paramedian forehead flaps. Its use resulted in a statistically significant reduction in transfer of hair to the nose in our series. Level of evidence: 3.

  15. Evolutionary cost analysis of valsartan initiation among patients with hypertension: a time series approach.

    PubMed

    Sun, Peter; Chang, Joanne; Zhang, Jie; Kahler, Kristijan H

    2012-01-01

    This study examines the evolutionary impact of valsartan initiation on medical costs. A retrospective time series study design was used with a large, US national commercial claims database for the period of 2004-2008. Hypertensive patients who initiated valsartan between the ages of 18 and 63, and had continuous enrollment for a 24-month pre-initiation period and a 24-month post-initiation period, were selected. Patients' monthly medical costs were calculated based on individual claims. A novel time series model was devised with monthly medical costs as its dependent variables, autoregressive integrated moving average (ARIMA) as its stochastic components, and four indicative variables as its decomposed interventional components. The number of post-initiation months before a cost-offset point was also assessed. Patients (n = 18,269) had a mean age of 53 at the initiation date, and 53% of them were female. The most common co-morbid conditions were dyslipidemia (52%), diabetes (24%), and hypertensive complications (17%). The time series model suggests that medical costs were increasing by approximately $10 per month (p < 0.01) before the initiation, and decreasing by approximately $6 per month (p < 0.01) after the initiation. After the 4th post-initiation month, medical costs for patients with the initiation were statistically significantly lower (p < 0.01) than forecasted medical costs for the same patients without the initiation. The study has limitations in data representativeness, in its ability to capture unrecorded clinical conditions, treatments, and costs, and in its generalizability to patients with different characteristics. Commercially insured hypertensive patients experienced a monthly medical cost increase before valsartan initiation. Based on our model, the evolutionary impact of the initiation on medical costs included a temporary cost surge, a gradual, consistent, and statistically significant cost decrease, and a cost-offset point around the 4th post-initiation month.
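
    As a hedged illustration of this kind of interventional time series model, the sketch below fits an ARIMA error structure with step and ramp regressors to simulated monthly costs; the decomposition and coefficients are illustrative, not the study's actual model or estimates.

```python
# A minimal sketch of an intervention-style cost model: ARIMA errors plus
# indicator regressors for a level shift and a post-initiation slope change.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
n, t0 = 48, 24                                    # months; initiation at t0
t = np.arange(n)
step = (t >= t0).astype(float)                    # level-shift component
ramp = np.where(t >= t0, t - t0, 0.0)             # post-initiation slope

# Simulated monthly costs: +$10/month before, about -$6/month after.
cost = 500 + 10 * t + 150 * step - 16 * ramp + rng.normal(0, 30, n)

exog = np.column_stack([t, step, ramp])
res = ARIMA(cost, exog=exog, order=(1, 0, 0)).fit()
print(res.params.round(2))    # const, t, step, ramp, ar.L1, sigma2
```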

  16. Systematization, condensed description, and prediction of sets of anion exchange extraction constants on the basis of their statistical treatment by computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mezhov, E.A.; Reimarov, G.A.; Rubisov, V.N.

    1987-05-01

    On the basis of a statistical treatment of the entire set of published data on anion exchange extraction constants, the authors have refined and expanded the scale of the hydration parameters ΔG_hydr for the anions (the effective free energies of hydration for the anions). The authors have estimated the parameters ΔG for 93 anions and the corresponding coefficients for 94 series of extraction systems, which are distinguished within each series only by the nature of the exchanging anions. The series are distinguished from one another by the nature of the cation extraction agent and the diluent.

  17. Applications of statistics to medical science, II overview of statistical procedures for general use.

    PubMed

    Watanabe, Hiroshi

    2012-01-01

    Procedures of statistical analysis are reviewed to provide an overview of applications of statistics for general use. Topics that are dealt with are inference on a population, comparison of two populations with respect to means and probabilities, and multiple comparisons. This study is the second part of a series in which we survey medical statistics. Arguments related to statistical associations and regressions will be made in subsequent papers.

  18. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
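
    A minimal sketch of that generation recipe follows, with illustrative (not fitted) coefficients: a deterministic trend plus an amplitude-modulated AR(1) noise whose parameter varies in time.

```python
# A minimal sketch of generating a trend plus amplitude-modulated AR(1)
# noise with a time-varying parameter; all coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(10)
n = 500
t = np.arange(n)

trend = 1.0 * np.exp(-t / 150)                    # hypothetical decaying trend
phi = 0.9 - 0.4 * t / n                           # time-varying AR(1) parameter
amp = 0.2 * (1 + np.sin(2 * np.pi * t / 250))     # amplitude modulation

eps = np.zeros(n)
for k in range(1, n):
    eps[k] = phi[k] * eps[k - 1] + rng.standard_normal()

concentration = trend + amp * eps
print(concentration[:5].round(3))
```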

  19. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To assess risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
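
    The sketch below illustrates the Monte Carlo core of such a non-stationary approach under strong simplifications: each year of a hypothetical 50-year design lifespan gets its own GEV parameters (here linearly interpolated between illustrative "current" and "future" fits), and 10,000 simulated lifespans yield a design value with a chosen lifespan exceedance probability. Note that scipy's shape parameter c equals -ξ.

```python
# A minimal sketch of a non-stationary GEV Monte Carlo "super ensemble":
# year-specific GEV parameters, 10,000 realizations of lifespan maxima.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)
lifespan = 50                                     # years of design lifespan
loc = np.linspace(100, 120, lifespan)             # location drifts upward
scale = np.linspace(15, 20, lifespan)
shape = -0.1                                      # scipy convention: c = -xi

# 10,000 realizations: one annual maximum per year, per realization.
annual = genextreme.rvs(shape, loc=loc, scale=scale,
                        size=(10000, lifespan), random_state=rng)
lifespan_max = annual.max(axis=1)
print("design value with 1% lifespan exceedance:",
      np.quantile(lifespan_max, 0.99).round(1))
```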

  20. 76 FR 32148 - Applications for New Awards; Investing in Innovation Fund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    .... Interrupted time series design \\5\\ means a type of quasi- experimental study in which the outcome of interest... interrupted time series design that relies on the comparison of treatment effects on a single subject or group... matched comparison group designs (as defined in this notice), interrupted time series designs (as defined...

  1. County business patterns, 1996 : Kansas

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : Texas

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1997 : Connecticut

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1997 : Georgia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1997 : Ohio

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1997 : Indiana

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1997 : Nevada

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1997 : Louisiana

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Michigan

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1997 : Iowa

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1997 : Florida

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1997 : Arizona

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : New York

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : Illinois

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1997 : Virginia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1997 : North Carolina

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1997 : Pennsylvania

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1997 : Minnesota

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1997 : Alabama

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1997 : Delaware

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1997 : Hawaii

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : Vermont

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1996 : Indiana

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1997 : Oregon

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1997 : New Mexico

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : Texas

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1996 : Arizona

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1997 : Kentucky

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1996 : North Carolina

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1997 : Tennessee

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : New York

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1996 : California

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : Puerto Rico

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : Mississippi

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1996 : Vermont

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1996 : Oklahoma

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1997 : Colorado

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Maryland

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1996 : Wyoming

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1996 : Missouri

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1996 : Nevada

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : Missouri

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1996 : Rhode Island

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1996 : Michigan

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : New Jersey

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : Arkansas

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1996 : Nebraska

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1997 : Utah

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Wyoming

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1997 : Rhode Island

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : Massachusetts

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1996 : Iowa

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1996 : Alabama

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : West Virginia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1997 : Washington

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1996 : South Dakota

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1996 : Pennsylvania

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Maine

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1996 : Delaware

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1997 : Maine

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1997 : Oklahoma

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : Wisconsin

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1997 : Kansas

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1996 : Hawaii

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : Alaska

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : Louisiana

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1996 : Ohio

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1996 : Montana

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1996 : North Dakota

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1996 : Georgia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : New Mexico

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1996 : Mississippi

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : Montana

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : South Dakota

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1997 : New Jersey

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1996 : Wisconsin

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1997 : Nebraska

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Florida

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1996 : Utah

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1996 : Virginia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1996 : Connecticut

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1996 : Puerto Rico

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1997 : South Carolina

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1996 : Idaho

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : New Hampshire

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : West Virginia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1997 : New Hampshire

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1996 : Tennessee

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Maryland

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1997 : Massachusetts

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1997 : Idaho

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1996 : Colorado

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : Arkansas

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1996 : Kentucky

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1996 : Illinois

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1996 : Oregon

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1996 : South Carolina

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Minnesota

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1997 : Alaska

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1997 : North Dakota

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. Highway Statistics 1980

    DOT National Transportation Integrated Search

    1980-01-01

    This publication brings together annual series of selected statistical tabulations relating to highway transportation in three major areas: (1) Highway use--the ownership and operation of motor vehicles; (2) highway finance--the receipts and expendit...

  2. Highway statistics 2007

    DOT National Transportation Integrated Search

    2007-01-01

    This publication brings together annual series of selected statistical tabulations relating to highway transportation in four major areas: (1) highway infrastructure and users; (2) motor vehicles and fuel; (3) highway finance--the receipts and expend...

  3. Light-weight Parallel Python Tools for Earth System Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.

    2015-12-01

    With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x, to an expected 25PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) transform the data from time-slice to time-series format and (2) compute the climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight and easy to install, to have very few dependencies, and to be easily inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.
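
    As a rough illustration of the time-slice-to-time-series conversion described above (a minimal sketch, not the authors' tool; it assumes xarray and dask are installed, and the file names and variables are hypothetical):

        # Convert "time-slice" output (one file per time step, all variables)
        # into "time-series" files (one file per variable, all time steps).
        import xarray as xr

        slice_files = ["model.000.nc", "model.001.nc", "model.002.nc"]  # placeholders

        # Concatenate the per-step files along their shared time coordinate.
        ds = xr.open_mfdataset(slice_files, combine="by_coords")

        # Write one time-series file per variable.
        for name, var in ds.data_vars.items():
            var.to_netcdf(f"{name}.timeseries.nc")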

  4. Estimation of trends

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The application of statistical methods to recorded ozone measurements is described. A long-term depletion of ozone at the magnitudes predicted by the NAS would be harmful to most forms of life. Empirical prewhitening filters, whose derivation is independent of the underlying physical mechanisms, were analyzed. Statistical analysis provides a checks-and-balances role: time-series filtering separates variation into systematic and random parts, ensures that errors are uncorrelated, and identifies significant phase-lag dependencies. The use of time-series modeling to enhance the capability of detecting trends is discussed.
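
    A minimal sketch of the prewhitening idea on simulated data (not the report's filters; the AR(1) structure, trend size and trend test below are assumptions for illustration only):

        # Remove lag-1 autocorrelation before testing for a trend.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        n, phi = 240, 0.6
        noise = np.zeros(n)
        for i in range(1, n):                   # AR(1) "red" background variability
            noise[i] = phi * noise[i - 1] + rng.normal()
        series = -0.01 * np.arange(n) + noise   # weak downward trend plus noise

        r1 = np.corrcoef(series[:-1], series[1:])[0, 1]  # lag-1 correlation estimate
        white = series[1:] - r1 * series[:-1]            # prewhitened series

        slope, _, _, p, _ = stats.linregress(np.arange(white.size), white)
        print(f"trend after prewhitening: slope={slope:.4f}, p={p:.3f}")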

  5. State transportation profile : summary

    DOT National Transportation Integrated Search

    2003-12-01

    The Bureau of Transportation Statistics (BTS) presents a statistical profile of transportation in the 50 states and the District of Columbia. This document supplements a previously published series of individual state profiles. Like the individ...

  6. Bayesian methods for outliers detection in GNSS time series

    NASA Astrophysics Data System (ADS)

    Qianqian, Zhang; Qingming, Gui

    2013-07-01

    This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed that simultaneously detects different types of outliers by introducing a classification variable for each outlier type; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in depth the causes of masking and swamping when detecting patches of additive outliers, and an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Thirdly, the correctness of the theories and methods proposed above is illustrated with simulated data and then by analyzing real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.
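
    The posterior-probability idea can be sketched far more simply than the paper's Gibbs samplers, e.g. by scoring each residual under a two-component normal mixture (all parameters below are illustrative assumptions, not the paper's model):

        # Posterior probability that each point is an additive outlier.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        resid = rng.normal(0.0, 1.0, 200)
        resid[50] += 8.0                      # inject one additive outlier

        sigma, k, prior = 1.0, 5.0, 0.01      # noise sd, inflation factor, outlier prior
        lik_reg = norm.pdf(resid, 0.0, sigma)        # likelihood under "regular"
        lik_out = norm.pdf(resid, 0.0, k * sigma)    # likelihood under "outlier"
        post = prior * lik_out / (prior * lik_out + (1 - prior) * lik_reg)

        print(np.where(post > 0.5)[0])        # index 50 is flagged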

  7. Structure- and ligand-based structure-activity relationships for a series of inhibitors of aldolase.

    PubMed

    Ferreira, Leonardo G; Andricopulo, Adriano D

    2012-12-01

    Aldolase has emerged as a promising molecular target for the treatment of human African trypanosomiasis. In recent years, owing to the increasing number of patients infected with Trypanosoma brucei, there has been an urgent need for new drugs to treat this neglected disease. In the present study, two-dimensional fragment-based quantitative structure-activity relationship (QSAR) models were generated for a series of inhibitors of aldolase. Through the application of leave-one-out and leave-many-out cross-validation procedures, significant correlation coefficients were obtained (r²=0.98 and q²=0.77) as an indication of the internal and external statistical consistency of the models. The best model was employed to predict pKi values for a series of test set compounds, and the predicted values were in good agreement with the experimental results, showing the predictive power of the model for untested compounds. Moreover, structure-based molecular modeling studies were performed to investigate the binding mode of the inhibitors in the active site of the parasitic target enzyme. The structural and QSAR results provided useful molecular information for the design of new aldolase inhibitors within this structural class.
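
    The leave-one-out q² statistic mentioned above can be computed as follows (a sketch on synthetic descriptors and activities, not the paper's fragment-based data; the ridge regressor is an arbitrary stand-in):

        # q^2 = 1 - PRESS / total sum of squares, via leave-one-out CV.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(1)
        X = rng.normal(size=(40, 5))                      # hypothetical descriptors
        y = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=40)  # hypothetical pKi

        press = 0.0
        for train, test in LeaveOneOut().split(X):
            model = Ridge(alpha=1.0).fit(X[train], y[train])
            press += (y[test][0] - model.predict(X[test])[0]) ** 2

        q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
        print(f"q^2 = {q2:.2f}")   # values near 1 indicate internal consistency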

  8. A time series analysis performed on a 25-year period of kidney transplantation activity in a single center.

    PubMed

    Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U

    2010-05-01

    Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures per year at 30 per center. The number of procedures performed in a single center over a long period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases per year were performed in 1998 (n = 86), followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. The number of kidney transplants expected for 2008, obtained by applying Holt-Winters exponential smoothing to the period 1983 to 2007, was 58 procedures, while in that year there were 52. The time series approach may be helpful for establishing a minimum volume per year at the single-center level.
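
    A one-step-ahead Holt-Winters forecast of annual counts can be reproduced in a few lines (a sketch with made-up counts, using Python's statsmodels rather than the R functions the authors used):

        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        counts = pd.Series(
            [40, 45, 52, 60, 75, 86, 70, 73, 82, 65, 60, 58],  # hypothetical volumes
            index=pd.period_range("1996", periods=12, freq="Y"),
        )

        # Annual data: additive trend, no seasonal component.
        fit = ExponentialSmoothing(counts, trend="add", seasonal=None).fit()
        print(fit.forecast(1))   # expected number of procedures next year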

  9. Loop series for discrete statistical models on graphs

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael; Chernyak, Vladimir Y.

    2006-06-01

    In this paper we present the derivation details, logic, and motivation for the three loop calculus introduced in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Generating functions for each of the three interrelated discrete statistical models are expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls belief-propagation (BP) contribution; the other terms are labelled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complementary approach, briefly explained in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Local gauge symmetry transformations that clarify an important invariant feature of the BP solution are revealed in both approaches. The individual terms change under the gauge transformation while the partition function remains invariant. The requirement for all individual terms to be nonzero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.

  10. A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series.

    PubMed

    Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel

    2015-01-01

    Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.
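
    The classifier-based stage discrimination can be sketched on simulated patterns (not the study's fMRI data; the stage means, voxel counts and logistic model below are illustrative assumptions):

        # Decode four task stages from multivariate activity patterns.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n_per_stage, n_voxels = 50, 30
        stages = np.repeat(np.arange(4), n_per_stage)      # encoding/choice/reward/delay
        means = rng.normal(scale=0.5, size=(4, n_voxels))  # stage-specific patterns
        X = means[stages] + rng.normal(size=(4 * n_per_stage, n_voxels))

        acc = cross_val_score(LogisticRegression(max_iter=1000), X, stages, cv=5)
        print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.25)")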

  11. 78 FR 75511 - Special Conditions: Bombardier Inc., Models BD-500-1A10 and BD-500-1A11 Series Airplanes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... Design Features The C-series airplanes will incorporate the following novel or unusual design features: A.... Conclusion This action affects only certain novel or unusual design features on two model series of airplanes... Inc., Models BD-500-1A10 and BD-500-1A11 Series Airplanes; Electronic Flight Control System: Control...

  12. Towards Solving the Mixing Problem in the Decomposition of Geophysical Time Series by Independent Component Analysis

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2000-01-01

    The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires statistical independence of the components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
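
    The mixing problem is easy to reproduce on synthetic data (a sketch: two independent sources, a linear mixture, and scikit-learn's PCA and FastICA as stand-ins for the techniques discussed):

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 2000)
        sources = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]  # independent signals
        mixing = np.array([[1.0, 0.5], [0.5, 1.0]])
        mixed = sources @ mixing + 0.02 * rng.normal(size=sources.shape)  # observed sum

        pca_comps = PCA(n_components=2).fit_transform(mixed)    # decorrelated, but mixed
        ica_comps = FastICA(n_components=2, random_state=0).fit_transform(mixed)

        # Correlate recovered components with the true sources: ICA should show
        # one near-1 entry per row, PCA a mixture of both sources.
        for name, comps in [("PCA", pca_comps), ("ICA", ica_comps)]:
            c = np.corrcoef(np.c_[sources, comps], rowvar=False)[:2, 2:]
            print(name, np.round(np.abs(c), 2))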

  13. National Aquatic Resource Surveys & Statistics: Role of statistics in the development of a national monitoring program

    EPA Science Inventory

    The National Aquatic Resource Surveys (NARS) are a series of four statistical surveys conducted by the U.S. Environmental Protection Agency working in collaboration with states, tribal nations and other federal agencies. The surveys are conducted for lakes and reservoirs, streams...

  14. What's in a Face? Visual Contributions to Speech Segmentation

    ERIC Educational Resources Information Center

    Mitchel, Aaron D.; Weiss, Daniel J.

    2010-01-01

    Recent research has demonstrated that adults successfully segment two interleaved artificial speech streams with incongruent statistics (i.e., streams whose combined statistics are noisier than the encapsulated statistics) only when provided with an indexical cue of speaker voice. In a series of five experiments, our study explores whether…

  15. Digest of Education Statistics, 2005. NCES 2006-030

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; Tan, Alexandra G.; Hoffman, Charlene M.

    2006-01-01

    The 2005 edition of the "Digest of Education Statistics" is the 41st in a series of publications initiated in 1962. Its primary purpose is to provide a compilation of statistical information covering the broad field of American education from prekindergarten through graduate school. The "Digest" includes a selection of data…

  16. Handbook of Labor Statistics. Bulletin 2175.

    ERIC Educational Resources Information Center

    Springsteen, Rosalind, Comp.; Epstein, Rosalie, Comp.

    This publication makes available in one volume the major series produced by the Bureau of Labor Statistics. Technical notes preceding each major section contain information on data changes and explain the series. Forty-four tables derived from the Current Population Survey (CPS) provide statistics on labor force and employment status,…

  17. 76 FR 9331 - Professional Development for Arts Educators Program; Office of Innovation and Improvement...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... adaptation of an interrupted time series design that relies on the comparison of treatment effects on a... group designs (as defined in this notice), interrupted time series designs (as defined in this notice... series design means a type of quasi-experimental study (as defined in this notice) in which the outcome...

  18. Construct-a-Greenhouse. Science by Design Series.

    ERIC Educational Resources Information Center

    Lee, Felicia

    This book is one of four books in the Science-by-Design Series created by TERC and funded by the National Science Foundation (NSF). This series presents directed instruction on how to successfully formulate and carry out product design. Students learn and apply concepts in science and technology to design and build a pair of insulated gloves, a…

  19. Analysis of Time-Series Quasi-Experiments. Final Report.

    ERIC Educational Resources Information Center

    Glass, Gene V.; Maguire, Thomas O.

    The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…

  20. A Statistical Portrait of Women in the United States. Current Population Reports, Special Studies Series P-32, No. 58.

    ERIC Educational Resources Information Center

    Bureau of the Census (DOC), Suitland, MD.

    This report presents a statistical portrait of the changing role of women in the United States during the 20th century. Data are from United States Government sources--from surveys, decennial censuses, vital statistics, and administrative records. The majority of the statistics have been published previously, either in government documents or…

  1. Use of Tests of Statistical Significance and Other Analytic Choices in a School Psychology Journal: Review of Practices and Suggested Alternatives.

    ERIC Educational Resources Information Center

    Snyder, Patricia A.; Thompson, Bruce

    The use of tests of statistical significance was explored, first by reviewing some criticisms of contemporary practice in the use of statistical tests as reflected in a series of articles in the "American Psychologist" and in the appointment of a "Task Force on Statistical Inference" by the American Psychological Association…

  2. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  3. County business patterns, 1997 : U.S. summary

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1997 : District of Columbia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : District of Columbia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : U.S. summary

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting

    NASA Astrophysics Data System (ADS)

    Tong, Howell

    1995-04-01

    The Table of Contents for the full book PDF is as follows:
    * Preface
    * Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective
    * A Theory of Correlation Dimension for Stationary Time Series
    * On Prediction and Chaos in Stochastic Systems
    * Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic
    * A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series
    * Chaos and Nonlinear Forecastability in Economics and Finance
    * Paradigm Change in Prediction
    * Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction
    * Chaos in Geophysical Fluids
    * Chaotic Modulation of the Solar Cycle
    * Fractal Nature in Earthquake Phenomena and its Simple Models
    * Singular Vectors and the Predictability of Weather and Climate
    * Prediction as a Criterion for Classifying Natural Time Series
    * Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies
    * Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study

  8. deltaGseg: macrostate estimation via molecular dynamics simulations and multiscale time series analysis.

    PubMed

    Low, Diana H P; Motakis, Efthymios

    2013-10-01

    Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task using moving averages at certain decorrelation times, assuming that the system comes from a single-conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelet denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.
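
    The denoise-then-cluster idea can be sketched in Python (illustrative only; deltaGseg itself is an R/Bioconductor package, and the series, wavelet and threshold below are assumptions):

        # Wavelet-denoise a free-energy-like trace, then cluster into macrostates.
        import numpy as np
        import pywt
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(4)
        series = np.r_[rng.normal(-50, 2, 300), rng.normal(-45, 2, 300)]  # two states

        coeffs = pywt.wavedec(series, "db4", level=4)
        # Universal soft threshold estimated from the finest detail level.
        thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(series.size))
        denoised = pywt.waverec(
            [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]],
            "db4",
        )[: series.size]

        # Hierarchical clustering of the denoised values into two macrostates.
        labels = fcluster(linkage(denoised.reshape(-1, 1), "ward"), 2, criterion="maxclust")
        print(np.bincount(labels)[1:])   # roughly 300 points per state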

  9. Highway Statistics 1950

    DOT National Transportation Integrated Search

    1952-01-01

    This publication, the sixth of an annual series, presents the 1950 statistical and analytical tables of general interest on motor fuel, motor vehicles, highway-user taxation, financing of State highways, highway mileage, and Federal aid for highways....

  10. Highway Statistics 1960

    DOT National Transportation Integrated Search

    1960-01-01

    This publication, the sixteenth of an annual series, presents the 1960 statistical and analytical tables of general interest on motor fuel, motor vehicles, highway-user taxation, State and local highway finance, highway mileage, and Federal aid for h...

  11. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) a classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are the cervical cancer risk assessments produced by the three approaches. Our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
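
    The classical half of such a comparison fits in a few lines (a sketch with simulated follow-up data using the lifelines package; the dynamic Bayesian network half is not shown):

        # Kaplan-Meier survival estimate on simulated, partly censored data.
        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(5)
        durations = rng.exponential(60, size=200)   # months of follow-up
        observed = rng.uniform(size=200) < 0.7      # True = event, False = censored

        kmf = KaplanMeierFitter()
        kmf.fit(durations, event_observed=observed)
        print(kmf.survival_function_.iloc[::40])    # survival curve, every 40th row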

  12. Stochastic rainfall synthesis for urban applications using different regionalization methods

    NASA Astrophysics Data System (ADS)

    Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.

    2017-12-01

    The proper design and efficient operation of urban drainage systems require long and continuous rainfall series at a high temporal resolution. Unfortunately, such time series are usually available at only a few locations, so it is useful to develop a stochastic precipitation model that can generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by site-specific probability distributions. Different regionalization methods based on site descriptors are presented, which are used to estimate the distributions for locations without observations. Regional frequency analysis, multiple linear regression and a vine-copula method are applied for this purpose. An area located in the north-west of Germany, comprising a total of 81 stations with 5 min rainfall records, is used to compare the different methods. The site descriptors include information available for the whole region: position, topography and hydrometeorological characteristics estimated from long-term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is based on ensembles of many long synthetic time series, which are compared with observed ones. The performance is also evaluated indirectly by setting up a fictional urban hydrological system to test the capability of the different methods with regard to flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three methods. Advantages and disadvantages of the different methods are presented and discussed.
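
    The alternating renewal idea is simple to sketch: dry and wet spells alternate, each drawn from its own distribution, and wet spells carry a random intensity. The Python fragment below is a toy generator under assumed distributional forms and parameters; the paper's fitted, site-specific distributions would replace them in practice.

      import numpy as np

      rng = np.random.default_rng(42)

      def synthetic_rainfall(n_events, mean_dry_h=30.0, mean_wet_h=4.0, mean_mmh=1.2):
          """Alternating renewal process: exponential dry spells, Weibull wet
          spells, gamma-distributed mean event intensities (all assumed forms)."""
          dry = rng.exponential(mean_dry_h, n_events)           # dry spells (h)
          wet = rng.weibull(0.8, n_events) * mean_wet_h         # wet spells (h)
          intensity = rng.gamma(2.0, mean_mmh / 2.0, n_events)  # intensity (mm/h)
          starts = np.cumsum(dry + wet) - wet                   # event start times (h)
          return starts, wet, wet * intensity                   # start, duration, depth

      starts, durations, depths = synthetic_rainfall(1000)
      print(f"mean event depth: {depths.mean():.2f} mm")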

  13. Power estimation using simulations for air pollution time-series studies

    PubMed Central

    2012-01-01

    Background: Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software.

    Methods: Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared.

    Results: In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward.

    Conclusions: These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599

  14. Power estimation using simulations for air pollution time-series studies.

    PubMed

    Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt

    2012-09-20

    Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
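
    Records 13 and 14 describe the same simulation-based power calculation; a compact sketch of the core loop follows, assuming Python with numpy and statsmodels. The baseline count, effect size and pollutant series below are made-up stand-ins for the Atlanta data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n_days, n_sims, alpha = 365, 2000, 0.05
      beta = 0.02                                 # assumed log-rate effect per unit
      pollutant = rng.gamma(4.0, 2.0, n_days)     # stand-in daily pollutant series
      X = sm.add_constant(pollutant)

      rejections = 0
      for _ in range(n_sims):
          # Simulate daily counts with the specified association and Poisson error
          mu = np.exp(np.log(20.0) + beta * pollutant)
          y = rng.poisson(mu)
          fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
          rejections += fit.pvalues[1] < alpha    # test of the pollutant coefficient

      print(f"estimated power: {rejections / n_sims:.2f}")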

  15. Cardiovascular oscillations at the bedside: early diagnosis of neonatal sepsis using heart rate characteristics monitoring

    PubMed Central

    Moorman, J. Randall; Delos, John B.; Flower, Abigail A.; Cao, Hanqing; Kovatchev, Boris P.; Richman, Joshua S.; Lake, Douglas E.

    2014-01-01

    We have applied principles of statistical signal processing and non-linear dynamics to analyze heart rate time series from premature newborn infants in order to assist in the early diagnosis of sepsis, a common and potentially deadly bacterial infection of the bloodstream. We began with the observation of reduced variability and transient decelerations in heart rate interval time series for hours up to days prior to clinical signs of illness. We find that measurements of standard deviation, sample asymmetry and sample entropy are highly related to imminent clinical illness. We developed multivariable statistical predictive models, and an interface to display the real-time results to clinicians. Using this approach, we have observed numerous cases in which incipient neonatal sepsis was diagnosed and treated without any clinical illness at all. This review focuses on the mathematical and statistical time series approaches used to detect these abnormal heart rate characteristics and present predictive monitoring information to the clinician. PMID:22026974
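
    Of the three statistics named in this record, sample entropy is the least standard; a minimal implementation is sketched below, using the usual template length m and tolerance r conventions. This is an illustration on toy RR intervals, not the authors' monitoring code.

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          """SampEn = -ln(A/B): B counts template matches of length m, A of
          length m+1, within tolerance r * std(x) (Chebyshev distance)."""
          x = np.asarray(x, dtype=float)
          tol = r * x.std()

          def count_matches(length):
              n = len(x) - m                      # same template count for m and m+1
              templates = np.array([x[i:i + length] for i in range(n)])
              count = 0
              for i in range(n - 1):
                  dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                  count += np.sum(dist <= tol)
              return count

          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      rr = np.random.default_rng(7).normal(0.45, 0.02, 300)  # toy RR intervals (s)
      print(f"SampEn of toy series: {sample_entropy(rr):.3f}")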

  16. Design of Interrogation Protocols for Radiation Dose Measurements Using Optically-Stimulated Luminescent Dosimeters.

    PubMed

    Abraham, Sara A; Kearfott, Kimberlee J; Jawad, Ali H; Boria, Andrew J; Buth, Tobias J; Dawson, Alexander S; Eng, Sheldon C; Frank, Samuel J; Green, Crystal A; Jacobs, Mitchell L; Liu, Kevin; Miklos, Joseph A; Nguyen, Hien; Rafique, Muhammad; Rucinski, Blake D; Smith, Travis; Tan, Yanliang

    2017-03-01

    Optically-stimulated luminescent dosimeters are capable of being interrogated multiple times post-irradiation. Each interrogation removes a fraction of the signal stored within the optically-stimulated luminescent dosimeter. This signal loss must be corrected to avoid systematic errors when estimating the average signal over a series of optically-stimulated luminescent dosimeter interrogations, and a minimum number of consecutive readings is required to determine an average signal that lies within a desired accuracy of the true signal at a desired statistical confidence. This paper establishes a technical basis for determining the required number of readings for a particular application of these dosimeters when using certain OSL dosimetry systems.
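
    Both steps (correcting each reading for the signal removed by earlier readouts, and sizing the number of readings for a precision target) reduce to short arithmetic. The sketch below uses a hypothetical per-read depletion fraction and precision targets; it illustrates the logic only and is not tied to a specific dosimetry system.

      import numpy as np

      rng = np.random.default_rng(3)
      depletion = 0.0005                 # assumed fractional signal loss per readout
      true_signal, read_noise = 1000.0, 10.0

      # Simulate consecutive readings; each readout depletes the stored signal
      readings = np.array([rng.normal(true_signal * (1 - depletion) ** i, read_noise)
                           for i in range(10)])
      corrected = readings / (1 - depletion) ** np.arange(10)   # undo depletion
      print(f"corrected mean: {corrected.mean():.1f}")

      # Readings needed for the mean to lie within 1% of the true signal at 95%
      # confidence, assuming normal readouts: n >= (z * sigma / (a * mu))^2
      z, accuracy = 1.96, 0.01
      n_required = int(np.ceil((z * read_noise / (accuracy * true_signal)) ** 2))
      print(f"readings required: {n_required}")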

  17. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for signal prediction. The techniques are developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its extension, statistical empirical mode decomposition (SEMD), which broadens the scope of EMD by smoothing. To show the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.
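
    As an illustration of EMD-based trend extraction (not the authors' exact procedure), the fragment below decomposes a toy series into intrinsic mode functions and takes the residue plus the slowest mode as the trend. It assumes the PyEMD package (distributed on PyPI as EMD-signal); the series itself is synthetic.

      import numpy as np
      from PyEMD import EMD

      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 1.0, 500)
      # Toy "price" series: smooth trend + cycle + noise
      price = 100 + 20 * t ** 2 + 2 * np.sin(40 * t) + rng.normal(0, 0.5, 500)

      emd = EMD()
      emd.emd(price, t)                           # run the sifting procedure
      imfs, residue = emd.get_imfs_and_residue()  # fast modes first, then residue
      trend = residue + imfs[-1]                  # slowest mode + residue as trend
      print(imfs.shape, trend[:3])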

  18. Eruption patterns of the chilean volcanoes Villarrica, Llaima, and Tupungatito

    NASA Astrophysics Data System (ADS)

    Muñoz, Miguel

    1983-09-01

    The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none has been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertain the probability P with which the random Poisson model fits the data, indicate that there is significant agreement in the cases of Villarrica (P = 0.59) and Tupungatito (P = 0.3). Even though the P-value for Llaima is a marginally significant 0.1 (equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features may be present in the eruptive activity of this volcano.
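
    Two of the checks mentioned in this record (the coefficient of variation of repose times, and a Kolmogorov-Smirnov test against the exponential distribution implied by a Poisson process) are easy to reproduce in outline. The sketch below runs on hypothetical eruption onset dates; note that estimating the exponential scale from the same data makes the KS p-value approximate.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      # Hypothetical eruption onset years; a real catalog would replace these
      onsets = np.sort(rng.uniform(1800, 1983, 25))
      gaps = np.diff(onsets)                     # repose intervals (years)

      # Poisson process => exponential repose times, coefficient of variation = 1
      cv = gaps.std(ddof=1) / gaps.mean()
      ks = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
      print(f"CV = {cv:.2f}, KS stat = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")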

  19. The Total Ozone Series of Arosa: History, Homogenization and new results using statistical extreme value theory

    NASA Astrophysics Data System (ADS)

    Staehelin, J.; Rieder, H. E.; Maeder, J. A.; Ribatet, M.; Davison, A. C.; Stübi, R.

    2009-04-01

    Atmospheric ozone protects the biota living at the Earth's surface from harmful solar UV-B and UV-C radiation. The global ozone shield is expected to gradually recover from the anthropogenic disturbance of ozone-depleting substances (ODS) in the coming decades. The extratropical stratospheric ozone layer might increase significantly beyond the thickness of the chemically undisturbed atmosphere, which would enhance ozone concentrations at tropopause altitudes, where ozone is an important greenhouse gas. At Arosa, a resort village in the Swiss Alps, total ozone measurements started in 1926, yielding the longest total ozone series in the world. One Fery spectrograph and seven Dobson spectrophotometers were operated at Arosa, and the method used to homogenize the series will be presented. Due to its unique length, the series allows studying total ozone in the chemically undisturbed as well as in the ODS-loaded stratosphere. The series is particularly valuable for studying natural variability in the period prior to 1970, when ODS started to affect stratospheric ozone. Concepts from extreme value statistics allow objective definitions of "ozone extreme high" and "ozone extreme low" values by fitting the (daily mean) time series using the Generalized Pareto Distribution (GPD). Extreme high ozone events can be attributed to effects of El Niño and/or the NAO, whereas in the chemically disturbed stratosphere high frequencies of extreme low total ozone values occur simultaneously with periods of strong polar ozone depletion (identified by statistical modeling with Equivalent Stratospheric Chlorine times the volume of polar stratospheric clouds) and volcanic eruptions (such as El Chichon and Pinatubo).
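
    The peaks-over-threshold construction behind the "extreme high"/"extreme low" definitions can be sketched as follows, assuming scipy and a stand-in ozone series; the real analysis fits the homogenized Arosa record, and extreme lows are handled symmetrically by fitting deficits below a low threshold.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      ozone = rng.normal(330, 35, 20000)  # stand-in daily total ozone (Dobson units)

      # Fit a Generalized Pareto Distribution to exceedances over a high threshold
      u = np.quantile(ozone, 0.95)
      exceed = ozone[ozone > u] - u
      shape, loc, scale = stats.genpareto.fit(exceed, floc=0)
      print(f"threshold = {u:.0f} DU, GPD shape = {shape:.3f}, scale = {scale:.1f}")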

  20. Status and trends of land change in the United States--1973 to 2000

    USGS Publications Warehouse

    ,

    2012-01-01

    U.S. Geological Survey (USGS) Professional Paper 1794 is a four-volume series on the status and trends of the Nation’s land use and land cover, providing an assessment of the rates and causes of land-use and land-cover change in the United States between 1973 and 2000. Volumes A, B, C, and D provide analyses for the Western United States, the Great Plains, the Midwest–South Central United States, and the Eastern United States, respectively. The assessments of land-use and land-cover trends are conducted on an ecoregion-by-ecoregion basis, and each ecoregion assessment is guided by a nationally consistent study design that includes mapping, statistical methods, field studies, and analysis. Individual assessments provide a picture of the characteristics of land change occurring in a given ecoregion; in combination, they provide a framework for understanding the complex national mosaic of change and also the causes and consequences of change. Thus, each volume in this series provides a regional assessment of how (and how fast) land use and land cover are changing, and why. The four volumes together form the first comprehensive picture of land change across the Nation. This report is only one of the products produced by USGS on land-use and land-cover change in the United States. Other reports and land-cover statistics are available online at http://landcovertrends.usgs.gov.
