Science.gov

Sample records for addition statistical analyses

  1. [Clinical research = design × measurements × statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  2. Statistical analyses of the relative risk.

    PubMed Central

    Gart, J J

    1979-01-01

    Let P1 be the probability of a disease in one population and P2 be the probability of a disease in a second population. The ratio of these quantities, R = P1/P2, is termed the relative risk. We consider first the analyses of the relative risk from retrospective studies. The relation between the relative risk and the odds ratio (or cross-product ratio) is developed. The odds ratio can be considered a parameter of an exponential model possessing sufficient statistics. This permits the development of exact significance tests and confidence intervals in the conditional space. Unconditional tests and intervals are also considered briefly. The consequences of misclassification errors and ignoring matching or stratifying are also considered. The various methods are extended to combination of results over the strata. Examples of case-control studies testing the association between HL-A frequencies and cancer illustrate the techniques. The parallel analyses of prospective studies are given. If P1 and P2 are small with large sample sizes the appropriate model is a Poisson distribution. This yields an exponential model with sufficient statistics. Exact conditional tests and confidence intervals can then be developed. Here we consider the case where two populations are compared adjusting for sex differences as well as for the strata (or covariate) differences such as age. The methods are applied to two examples: (1) testing in the two sexes the ratio of relative risks of skin cancer in people living in different latitudes, and (2) testing over time the ratio of the relative risks of cancer in two cities, one of which fluoridated its drinking water and one which did not. PMID:540589
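
    The relative risk and odds ratio described above lend themselves to a short numerical illustration. The sketch below is not Gart's exact conditional analysis; it only computes the two ratios and standard Wald-type 95% intervals on the log scale for a single, invented 2x2 table.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table (made-up counts): exposed/unexposed vs. diseased/healthy.
a, b = 30, 70    # exposed:   diseased, healthy
c, d = 10, 90    # unexposed: diseased, healthy

p1, p2 = a / (a + b), c / (c + d)
rr = p1 / p2                                   # relative risk R = P1/P2
or_ = (a * d) / (b * c)                        # odds ratio (cross-product ratio)

z = stats.norm.ppf(0.975)                      # 95% Wald intervals on the log scale
se_log_rr = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
rr_ci = np.exp(np.log(rr) + np.array([-1, 1]) * z * se_log_rr)
or_ci = np.exp(np.log(or_) + np.array([-1, 1]) * z * se_log_or)

print(f"RR = {rr:.2f}, 95% CI {rr_ci.round(2)}")
print(f"OR = {or_:.2f}, 95% CI {or_ci.round(2)}")
```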

  3. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ANALYSES

    EPA Science Inventory

    The Environmental Monitoring and Assessment Program (EMAP) collects data that are used to statistically assess the environmental condition of large geographic regions. These data are then posted on the EMAP web site so that anyone can use them. Databases used for the statistical ...

  4. Insights into Corona Formation Through Statistical Analyses

    NASA Technical Reports Server (NTRS)

    Glaze, L. S.; Stofan, E. R.; Smrekar, S. E.; Baloga, S. M.

    2002-01-01

    Statistical analysis of an expanded database of coronae on Venus indicates that the populations of Type 1 (with fracture annuli) and 2 (without fracture annuli) corona diameters are statistically indistinguishable, and therefore we have no basis for assuming different formation mechanisms. Analysis of the topography and diameters of coronae shows that coronae that are depressions, rimmed depressions, and domes tend to be significantly smaller than those that are plateaus, rimmed plateaus, or domes with surrounding rims. This is consistent with the model of Smrekar and Stofan and inconsistent with predictions of the spreading drop model of Koch and Manga. The diameter range for domes, the initial stage of corona formation, provides a broad constraint on the buoyancy of corona-forming plumes. Coronae are only slightly more likely to be topographically raised than depressions, with Type 1 coronae most frequently occurring as rimmed depressions and Type 2 coronae most frequently occurring with flat interiors and raised rims. Most Type 1 coronae are located along chasmata systems or fracture belts, while Type 2 coronae are found predominantly as isolated features in the plains. Coronae at hot spot rises tend to be significantly larger than coronae in other settings, consistent with a hotter upper mantle at hot spot rises and their active state.
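
    The claim that the Type 1 and Type 2 diameter populations are statistically indistinguishable can be illustrated with a standard two-sample test. The abstract does not state which test was used; the sketch below applies a two-sample Kolmogorov-Smirnov test to simulated diameter samples as a stand-in for the corona database.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for Type 1 and Type 2 corona diameters (km); values are illustrative only.
type1 = rng.lognormal(mean=np.log(200), sigma=0.5, size=300)
type2 = rng.lognormal(mean=np.log(200), sigma=0.5, size=150)

stat, p = stats.ks_2samp(type1, type2)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# A large p-value gives no grounds to claim that the two diameter populations differ.
```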

  5. Insights into Corona Formation through Statistical Analyses

    NASA Technical Reports Server (NTRS)

    Glaze, L. S.; Stofan, E. R.; Smrekar, S. E.; Baloga, S. M.

    2002-01-01

    Statistical analysis of an expanded database of coronae on Venus indicates that the populations of Type 1 (with fracture annuli) and 2 (without fracture annuli) corona diameters are statistically indistinguishable, and therefore we have no basis for assuming different formation mechanisms. Analysis of the topography and diameters of coronae shows that coronae that are depressions, rimmed depressions, and domes tend to be significantly smaller than those that are plateaus, rimmed plateaus, or domes with surrounding rims. This is consistent with the model of Smrekar and Stofan and inconsistent with predictions of the spreading drop model of Koch and Manga. The diameter range for domes, the initial stage of corona formation, provides a broad constraint on the buoyancy of corona-forming plumes. Coronae are only slightly more likely to be topographically raised than depressions, with Type 1 coronae most frequently occurring as rimmed depressions and Type 2 coronae most frequently occurring with flat interiors and raised rims. Most Type 1 coronae are located along chasmata systems or fracture belts, while Type 2 coronae are found predominantly as isolated features in the plains. Coronae at hotspot rises tend to be significantly larger than coronae in other settings, consistent with a hotter upper mantle at hotspot rises and their active state.

  6. A comparison of artificial neural networks and statistical analyses

    SciTech Connect

    Blough, D.K.; Anderson, K.K.

    1994-01-01

    Artificial neural networks have come to be used in a wide variety of data analytic applications, many of which were traditionally approached using statistical methods. It is the purpose of this paper to discuss the nature of the information obtained by each methodology, that of artificial neural networks and that of statistical analyses. Two aspects of the comparison will be considered: (1) what are the requirements needed for each approach in terms of model specification, data requirements, and computing power, and (2) what sort of information is contained in the results of each approach. Example analyses are presented characterizing the differences in the two approaches. A specific problem (hydrodynamic yield estimation) is presented with a corresponding data set. This data is then analyzed using statistical methods, and the results are compared with those obtained by using an artificial neural network. The requirements and results of the two approaches are then summarized as general guidelines an investigator can use in deciding which approach would be best for analyzing a given data set.

  7. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
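
    The small-world diagnostics named above (short average path lengths and strong local clustering relative to a density-matched random graph) can be computed in a few lines with networkx. The graph below is a synthetic stand-in, not the word-association, WordNet, or Roget data analysed in the paper.

```python
import networkx as nx

# Synthetic stand-in for a semantic network: a Watts-Strogatz small-world graph is used
# only so the script runs; the paper analyses word-association, WordNet and Roget data.
G = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)

n, m = G.number_of_nodes(), G.number_of_edges()
p_edge = 2 * m / (n * (n - 1))
R = nx.erdos_renyi_graph(n, p_edge, seed=2)        # density-matched random graph

def metrics(H):
    # Work on the largest connected component so path lengths are defined.
    H = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(H), nx.average_clustering(H)

for name, H in [("semantic-like graph", G), ("random graph", R)]:
    L, C = metrics(H)
    print(f"{name}: avg path length = {L:.2f}, clustering = {C:.3f}")
# Small-world signature: path length comparable to the random graph, clustering much higher.
```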

  8. Weak additivity principle for current statistics in d dimensions.

    PubMed

    Pérez-Espigares, C; Garrido, P L; Hurtado, P I

    2016-04-01

    The additivity principle (AP) allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Here we extend this conjecture to general d-dimensional driven diffusive systems, and validate its predictions against both numerical simulations of rare events and microscopic exact calculations of three paradigmatic models of diffusive transport in d=2. Crucially, the existence of a structured current vector field at the fluctuating level, coupled to the local mobility, turns out to be essential to understand current statistics in d>1. We prove that, when compared to the straightforward extension of the AP to high d, the so-called weak AP always yields a better minimizer of the macroscopic fluctuation theory action for current statistics.

  9. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
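
    Steps (ii) and (iii) above can be sketched compactly. The code below is an illustrative reduction, not the authors' pipeline: it draws simulated avalanche sizes from a continuous power law, estimates the exponent by maximum likelihood with a known lower cut-off, and reports the Kolmogorov-Smirnov distance to the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
x_min, alpha_true = 1.0, 1.5
# Simulated "avalanche sizes" from a continuous power law p(x) ~ x^(-alpha), x >= x_min.
u = rng.uniform(size=5000)
sizes = x_min * (1 - u) ** (-1 / (alpha_true - 1))

# Maximum-likelihood estimate of the exponent (continuous power law, known x_min).
n = sizes.size
alpha_hat = 1 + n / np.sum(np.log(sizes / x_min))

# Kolmogorov-Smirnov distance between the empirical CDF and the fitted power law.
xs = np.sort(sizes)
ecdf = np.arange(1, n + 1) / n
model_cdf = 1 - (xs / x_min) ** (1 - alpha_hat)
ks = np.max(np.abs(ecdf - model_cdf))

print(f"alpha_hat = {alpha_hat:.3f} (exponent reported as -{alpha_hat:.2f}), KS = {ks:.4f}")
```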

  10. A weighted U statistic for association analyses considering genetic heterogeneity.

    PubMed

    Wei, Changshuai; Elston, Robert C; Lu, Qing

    2016-07-20

    Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7h, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…
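
    The article's own five steps are not reproduced in this abstract; the sketch below uses a common textbook version of such a template (hypotheses, significance level, test statistic, p-value, decision) on made-up data.

```python
import numpy as np
from scipy import stats

data = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.7, 5.4, 5.2, 4.9, 5.5])  # made-up sample

# Step 1. Hypotheses: H0: mu = 5.0 versus H1: mu != 5.0
mu0 = 5.0
# Step 2. Significance level
alpha = 0.05
# Step 3. Test statistic (one-sample t-test)
t_stat, p_value = stats.ttest_1samp(data, popmean=mu0)
# Step 4. p-value
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# Step 5. Decision and conclusion in context
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```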

  12. Testing of hypotheses about altitude decompression sickness by statistical analyses

    NASA Technical Reports Server (NTRS)

    Van Liew, H. D.; Burkard, M. E.; Conkin, J.; Powell, M. R. (Principal Investigator)

    1996-01-01

    This communication extends a statistical analysis of forced-descent decompression sickness at altitude in exercising subjects (J Appl Physiol 1994; 76:2726-2734) with a data subset having an additional explanatory variable, rate of ascent. The original explanatory variables for risk-function analysis were environmental pressure of the altitude, duration of exposure, and duration of pure-O2 breathing before exposure; the best fit was consistent with the idea that instantaneous risk increases linearly as altitude exposure continues. Use of the new explanatory variable improved the fit of the smaller data subset, as indicated by log likelihood. Also, with ascent rate accounted for, replacement of the term for linear accrual of instantaneous risk by a term for rise and then decay made a highly significant improvement upon the original model (log likelihood increased by 37 log units). The authors conclude that a more representative data set and removal of the variability attributable to ascent rate allowed the rise-and-decay mechanism, which is expected from theory and observations, to become manifest.
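
    The risk-function analysis described above treats decompression sickness as a survival problem in which instantaneous risk is integrated over the exposure. A schematic version of the two competing hazard shapes mentioned (linear accrual versus rise and then decay), with symbols chosen here for illustration rather than taken from the paper, is:

```latex
P(\mathrm{DCS\ by\ time\ } t) \;=\; 1 - \exp\!\left(-\int_{0}^{t} h(u)\,\mathrm{d}u\right),
\qquad
h_{\mathrm{linear}}(u) = \beta\, u,
\qquad
h_{\mathrm{rise\text{-}decay}}(u) = \beta\, u\, e^{-\lambda u}.
```

    Under either form the parameters would be estimated by maximizing the log likelihood over the observed outcomes, and competing forms compared by the change in log likelihood, which is how the 37 log-unit improvement quoted above is interpreted.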

  13. Cucumis monosomic alien addition lines: morphological, cytological, and genotypic analyses.

    PubMed

    Chen, Jin-Feng; Luo, Xiang-Dong; Qian, Chun-Tao; Jahn, Molly M; Staub, Jack E; Zhuang, Fei-Yun; Lou, Qun-Feng; Ren, Gang

    2004-05-01

    Cucumis hystrix Chakr. (HH, 2n=24), a wild relative of the cultivated cucumber, possesses several potentially valuable disease-resistance and abiotic stress-tolerance traits for cucumber ( C. sativus L., CC, 2n=14) improvement. Numerous attempts have been made to transfer desirable traits since the successful interspecific hybridization between C. hystrix and C. sativus, one of which resulted in the production of an allotriploid (HCC, 2n=26: one genome of C. hystrix and two of C. sativus). When this genotype was treated with colchicine to induce polyploidy, two monosomic alien addition lines (MAALs) (plant nos. 87 and 517: 14 CC+1 H, 2n=15) were recovered among 252 viable plants. Each of these plants was morphologically distinct from allotriploids and cultivated cucumbers. Cytogenetic and molecular marker analyses were performed to confirm the genetic constitution and further characterize these two MAALs. Chromosome counts made from at least 30 meristematic cells from each plant confirmed 15 nuclear chromosomes. In pollen mother cells of plant nos. 87 and 517, seven bivalents and one univalent were observed at diakinesis and metaphase I; the frequency of trivalent formation was low (about 4-5%). At anaphase I and II, stochastic and asymmetric division led to the formation of two gamete classes: n=7 and n=8; however, pollen fertility was relatively high. Pollen stainability in plant no. 87 was 86.7% and in plant no. 517 was 93.2%. Random amplified polymorphic DNA analysis was performed using 100 random 10-base primers. Genotypes obtained with eight primers (A-9, A-11, AH-13, AI-19, AJ-18, AJ-20, E-19, and N-20) showed a band common to the two MAAL plants and C. hystrix that was absent in C. sativus, confirming that the alien chromosomes present in the MAALs were derived from C. hystrix. Morphological differences and differences in banding patterns were also observed between plant nos. 87 and 517 after amplification with primers AI-5, AJ-13, N-12, and N-20

  14. Study design, methodology and statistical analyses in the clinical development of sparfloxacin.

    PubMed

    Genevois, E; Lelouer, V; Vercken, J B; Caillon, R

    1996-05-01

    Many publications in the past 10 years have emphasised the difficulties of evaluating anti-infective drugs and the need for well-designed clinical trials in this therapeutic field. The clinical development of sparfloxacin in Europe, involving more than 4000 patients in ten countries, provided the opportunity to implement a methodology for evaluation and statistical analyses which would take into account actual requirements and past insufficiencies. This methodology focused on a rigorous and accurate patient classification for evaluability, subgroups of particular interest, efficacy assessment based on automation (algorithm) and individual case review by expert panel committees. In addition, the statistical analyses did not use significance testing but rather confidence intervals to determine whether sparfloxacin was therapeutically equivalent to the reference comparator antibacterial agents. PMID:8737126
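
    The confidence-interval approach to therapeutic equivalence described above can be illustrated briefly. The counts and the ±10-percentage-point margin below are invented; the idea is simply that equivalence is claimed when the entire 95% CI for the difference in cure rates lies within the prespecified margin.

```python
import numpy as np
from scipy import stats

# Hypothetical cure counts (invented): test antibacterial vs. reference comparator.
x1, n1 = 170, 200     # successes, patients on the test drug
x2, n2 = 165, 200     # successes, patients on the comparator
margin = 0.10         # prespecified equivalence margin (illustrative)

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = stats.norm.ppf(0.975)
lo, hi = diff - z * se, diff + z * se

print(f"difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
print("equivalent within margin" if (lo > -margin and hi < margin) else "equivalence not shown")
```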

  15. Study design, methodology and statistical analyses in the clinical development of sparfloxacin.

    PubMed

    Genevois, E; Lelouer, V; Vercken, J B; Caillon, R

    1996-05-01

    Many publications in the past 10 years have emphasised the difficulties of evaluating anti-infective drugs and the need for well-designed clinical trials in this therapeutic field. The clinical development of sparfloxacin in Europe, involving more than 4000 patients in ten countries, provided the opportunity to implement a methodology for evaluation and statistical analyses which would take into account actual requirements and past insufficiencies. This methodology focused on a rigorous and accurate patient classification for evaluability, subgroups of particular interest, efficacy assessment based on automation (algorithm) and individual case review by expert panel committees. In addition, the statistical analyses did not use significance testing but rather confidence intervals to determine whether sparfloxacin was therapeutically equivalent to the reference comparator antibacterial agents.

  16. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  17. Automated Evaluation of Medical Software Usage: Algorithm and Statistical Analyses.

    PubMed

    Cao, Ming; Chen, Yong; Zhu, Min; Zhang, Jiajie

    2015-01-01

    Evaluating the correctness of medical software usage is critically important in healthcare system management. Turf [1] is a software tool that effectively collects interactions between the user and the computer. In this paper, we propose an algorithm to compare the recorded human-computer interaction events with a predefined path. Based on the pass/fail results, statistical analysis methods are proposed for two applications: to identify training effects and to compare products of the same functionality.
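
    The abstract does not give the comparison algorithm itself; the sketch below illustrates the general idea with assumptions of its own: each recorded event sequence is scored pass/fail according to whether a predefined path occurs as an ordered subsequence, and the pass rates of two hypothetical products are then compared (here with Fisher's exact test, standing in for whatever statistics the paper uses). All event names and logs are invented.

```python
from scipy import stats

predefined_path = ["open_chart", "select_order", "enter_dose", "sign", "submit"]

def follows_path(recorded, path=predefined_path):
    """Pass if the predefined path occurs as an ordered subsequence of the recorded events."""
    it = iter(recorded)
    return all(step in it for step in path)

# Made-up event logs for two hypothetical products with the same functionality.
product_a = [["open_chart", "select_order", "enter_dose", "sign", "submit"],
             ["open_chart", "enter_dose", "select_order", "sign", "submit"],
             ["open_chart", "select_order", "search", "enter_dose", "sign", "submit"]]
product_b = [["open_chart", "select_order", "enter_dose", "submit"],
             ["open_chart", "select_order", "enter_dose", "sign", "submit"]]

pass_a = sum(follows_path(s) for s in product_a)
pass_b = sum(follows_path(s) for s in product_b)
table = [[pass_a, len(product_a) - pass_a], [pass_b, len(product_b) - pass_b]]
_, p = stats.fisher_exact(table)
print(f"product A: {pass_a}/{len(product_a)} pass; product B: {pass_b}/{len(product_b)} pass; p = {p:.3f}")
```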

  18. Statistical Analyses of Hydrophobic Interactions: A Mini-Review.

    PubMed

    Pratt, Lawrence R; Chaudhari, Mangesh I; Rempe, Susan B

    2016-07-14

    This review focuses on the striking recent progress in solving for hydrophobic interactions between small inert molecules. We discuss several new understandings. First, the inverse temperature phenomenology of hydrophobic interactions, i.e., strengthening of hydrophobic bonds with increasing temperature, is decisively exhibited by hydrophobic interactions between atomic-scale hard sphere solutes in water. Second, inclusion of attractive interactions associated with atomic-size hydrophobic reference cases leads to substantial, nontrivial corrections to reference results for purely repulsive solutes. Hydrophobic bonds are weakened by adding solute dispersion forces to treatment of reference cases. The classic statistical mechanical theory for those corrections is not accurate in this application, but molecular quasi-chemical theory shows promise. Finally, because of the masking roles of excluded volume and attractive interactions, comparisons that do not discriminate the different possibilities face an interpretive danger. PMID:27258151

  19. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    PubMed

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  20. Diatremes of the Hopi Buttes, Arizona; chemical and statistical analyses

    USGS Publications Warehouse

    Wenrich, K.J.; Mascarenas, J.F.

    1982-01-01

    Lacustrine sediments deposited in maar lakes of the Hopi Buttes diatremes are hosts for uranium mineralization of as much as 1500 ppm. The monchiquites and limburgite tuffs erupted from the diatremes are distinguished from normal alkalic basalts of the Colorado Plateau by their extreme silica undersaturation and high water, TiO2, and P2O5 contents. Many trace elements are also unusually abundant, including Ag, As, Ba, Be, Ce, Dy, Eu, F, Gd, Hf, La, Nd, Pb, Rb, Se, Sm, Sn, Sr, Ta, Tb, Th, U, V, Zn, and Zr. The lacustrine sediments, which consist predominantly of travertine and clastic rocks, are the hosts for syngenetic and epigenetic uranium mineralization of as much as 1500 ppm uranium. Fission track maps show the uranium to be disseminated within the travertine and clastic rocks, and although microprobe analyses have not, as yet, revealed discrete uranium-bearing phases, the clastic rocks show a correlation of high Fe, Ti, and P with areas of high U. Correlation coefficients show that for the travertines, clastics, and limburgite tuffs, Mo, As, Sr, Co, and V appear to have the most consistent and strongest correlations with uranium. Many elements, including many of the rare-earth elements, that are high in these three rocks are also high in the monchiquites, as compared to the average crustal abundance for the respective rock type. This similar suite of anomalous elements, which includes such immobile elements as the rare earths, suggests that fluids which deposited the travertines were related to the monchiquitic magma. The similar age of about 5 m.y. for both the lake beds and the monchiquites also appears to support this source for the mineralizing fluids.

  1. Statistical Analyses of Historical Rock Falls in Yosemite National Park

    NASA Astrophysics Data System (ADS)

    Austin, L. J.; Stock, G. M.; Collins, B. D.

    2014-12-01

    The steep cliffs of Yosemite Valley produce dozens of rock falls each year that pose a hazard to the four million annual visitors to Yosemite National Park. To better understand rock-fall processes, we use 156 years of rock fall data to examine temporal and spatial correlations between rock falls and seasonality, environmental conditions, and rock type. We also investigate the complexity of rock fall triggers, most notably precipitation-related triggers (precipitation, snowmelt, rain-on-snow), earthquakes, thermal stress, and freeze-thaw. Comparing rock fall occurrences and cumulative precipitation plots for 16 years between 1983 and 2011 demonstrates a temporal correlation between precipitation and rock falls; this is corroborated by the observation of 17 rock falls that occurred within a one-week period of significant precipitation in late winter 2014. Yet there are many rock falls that do not coincide with precipitation; we attribute these to other triggers when clear temporal correlations exist, or, lacking clear temporal correlations, we investigate possible factors involved in "unrecognized" triggers. The large number of rock falls in the database (925) affords the opportunity to establish a volume-frequency relation similar to that of earthquakes. We also investigate both the frequency and volume of rock falls for each significant lithologic unit exposed in Yosemite Valley and find that two units in particular, the granodiorite of Arch Rock and the granodiorite of Kuna Crest, produce higher rates of rock fall relative to the other lithologies. The aim of these analyses is to better understand conditions that contribute to rock fall and to increase understanding of rock-fall hazard within Yosemite National Park.
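
    The earthquake-like volume-frequency relation mentioned above is commonly fitted as a log-linear regression of cumulative frequency on volume. The sketch below does exactly that on simulated volumes; it is not the Yosemite database or the authors' fitting procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated rock-fall volumes (m^3) drawn from a Pareto-like distribution; illustrative only.
volumes = 1.0 * (1 - rng.uniform(size=925)) ** (-1 / 0.8)

v_sorted = np.sort(volumes)
# N(>=V): number of events with volume at least V (cumulative frequency).
n_ge = np.arange(len(v_sorted), 0, -1)

slope, intercept, r, p, se = stats.linregress(np.log10(v_sorted), np.log10(n_ge))
print(f"log10 N(>=V) = {intercept:.2f} + {slope:.2f} * log10 V   (r^2 = {r**2:.3f})")
```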

  2. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
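
    The paper's "what if" exercises are built in Excel and R; the same idea is sketched below in Python instead: hold the observed effect size fixed and watch how the two-sample t-test p-value changes as the hypothetical per-group sample size grows.

```python
import numpy as np
from scipy import stats

mean_diff, sd = 0.30, 1.0          # fixed hypothetical effect size (Cohen's d = 0.3)
for n in (10, 25, 50, 100, 200):   # "what if" each group had n participants?
    se = sd * np.sqrt(2 / n)       # standard error of the mean difference (equal n, equal sd)
    t = mean_diff / se
    df = 2 * n - 2
    p = 2 * stats.t.sf(abs(t), df)
    print(f"n per group = {n:4d}: t = {t:.2f}, p = {p:.4f}")
# The identical effect size moves from "non-significant" to "significant" purely through n.
```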

  3. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    SciTech Connect

    Udey, Ruth Norma

    2013-01-01

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  4. Occupational and Educational Structures of the Labor Force and Levels of Economic Development: Further Analyses and Statistical Data.

    ERIC Educational Resources Information Center

    Jallade, Jean-Pierre; And Others

    The volume is the second of two and presents additional statistical analyses of data discussed in the first, which presented 1960 and 1961 census data from 53 countries in an attempt to identify and quantify factors which determined the occupational and educational structure of the labor force. The second volume consists of eight chapters: (1) a…

  5. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, statistical studies together with the access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  6. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram S.; Murthy, Pappu

    2013-01-01

    This report describes statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties that are characterized are the ultimate strength, modulus, and Poisson's ratio by using a commercially available statistical package. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.
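
    The generic step of fitting candidate distributions to a measured property and comparing their fit can be sketched briefly. The code below uses simulated strength values, scipy, and a Kolmogorov-Smirnov measure; it is not the report's commercial package, its distribution choices, or its allowables procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strength = rng.normal(loc=650.0, scale=25.0, size=60)    # simulated ultimate strengths (MPa)

candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "Weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(strength)                          # maximum-likelihood fit
    ks, p = stats.kstest(strength, dist.cdf, args=params)
    # Note: KS p-values are only indicative when the parameters were fitted to the same data.
    print(f"{name:9s}: KS = {ks:.3f}, nominal p = {p:.3f}")
```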

  7. Statistical Reform: Evidence-Based Practice, Meta-Analyses, and Single Subject Designs

    ERIC Educational Resources Information Center

    Jenson, William R.; Clark, Elaine; Kircher, John C.; Kristjansson, Sean D.

    2007-01-01

    Evidence-based practice approaches to interventions has come of age and promises to provide a new standard of excellence for school psychologists. This article describes several definitions of evidence-based practice and the problems associated with traditional statistical analyses that rely on rejection of the null hypothesis for the…

  8. Selected hydrographs and statistical analyses characterizing the water resources of the Arkansas River basin, Colorado

    USGS Publications Warehouse

    Burns, A.W.

    1985-01-01

    Hydrographs of annual precipitation from 30 stations, April 1 snowpack readings from 18 snow-survey courses, annual discharge from 46 streamflow gaging stations, and instantaneous water levels from 37 wells are presented to illustrate the temporal and spatial variability of the water resources of the Arkansas River basin in Colorado. Statistical analyses indicate no apparent time trends in annual precipitation or April 1 snowpack , but they do indicate declines in annual discharge for locations in the eastern part of the basin. A composite hydrograph indicates a negligible change in groundwater levels between 1930 and 1980 in the alluvial aquifer downstream from Pueblo. Generally poor correlation occurs between precipitation data and snowpack data (less than 0.40 for monthly data and less than 0.61 for annual data). In addition, precipitation data did not correlate very well with discharge (less than 0.57 for monthly data), leading to the conclusion that the typical streamflow conditions are affected little by direct precipitation. Main-stem discharge correlates quite well with snow-pack (as much as 0.85 for annual data), indicating its dependence on snowmelt runoff. (USGS)

  9. Statistical analyses on the pattern of food consumption and digestive-tract cancers in Japan.

    PubMed

    Hara, N; Sakata, K; Nagai, M; Fujita, Y; Hashimoto, T; Yanagawa, H

    1984-01-01

    The relationships between areal differences in mortality from six digestive-tract cancers and consumption of selected foods in 46 of the 47 Japanese prefectures (Okinawa being excluded) were analyzed. Statistical analyses disclosed that the groups of foods positively associated with cancer death were as follows: for esophageal cancer, pork, oil, popular-grade sake, and green tea; for stomach cancer, fresh fish, salted or dried fish, salt, and special-grade sake; for colon cancer, bread, milk, butter, margarine, ketchup, beer, and salted or dried fish; for rectal cancer, fresh fish, salted or dried fish, salt, and popular-grade sake; for cancer of the biliary passages, pork, popular-grade sake, and green tea; and for pancreatic cancer, oil, mayonnaise, fresh fish, and salted or dried fish. These results are based on statistical analyses. Further epidemiological analyses are required to find a biological causal relationship. PMID:6545578

  10. The use of verb information in parsing: different statistical analyses lead to contradictory conclusions.

    PubMed

    Kennison, Shelia M

    2009-08-01

    The research investigated how comprehenders use verb information during syntactic parsing. Two reading experiments investigated the relationship between verb-specific variables and reading time. These experiments were close replications of prior work; however, two statistical techniques were used, rather than one. These were item-by-item correlations and participant-by-participant regression. In Experiment 1, reading time was measured using a self-paced moving window. In Experiment 2, eye movements were recorded during reading. Both experiments showed that the two types of statistical analyses support contradictory conclusions. The participant-by-participant regression analyses provided no evidence for the early use of verb information in parsing and support syntax-first approaches to parsing. In contrast, the results of item-by-item correlation were consistent with the prior research, supporting the view that verb information can guide initial parsing decisions. Implications for theories of parsing are discussed.

  11. Statistical tests for analysing directed movement of self-organising animal groups.

    PubMed

    Merrifield, A; Myerscough, Mary R; Weber, N

    2006-09-01

    We discuss some theory concerning directional data and introduce a suite of statistical tools that researchers interested in the directional movement of animal groups can use to analyse results from their models. We illustrate these tools by analysing the results of a model of groups moving under the duress of certain informed indistinguishable individuals, which arises in the context of honeybee (Apis mellifera) swarming behaviour. We modify an existing model of collective motion, based on inter-individual social interactions, allowing knowledgeable individuals to guide group members to the goal by travelling through the group in a direct line aligned with the goal direction.
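
    The abstract does not list the tests in its suite; the sketch below shows two standard directional-statistics quantities that such a suite typically includes, the mean direction with its resultant length and a Rayleigh test of uniformity, computed on simulated headings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated headings (radians) of group members, loosely clustered around a goal direction.
goal = np.pi / 4
angles = rng.vonmises(mu=goal, kappa=0.5, size=100)

n = angles.size
C, S = np.cos(angles).sum(), np.sin(angles).sum()
R_bar = np.hypot(C, S) / n                 # mean resultant length in [0, 1]
mean_dir = np.arctan2(S, C)                # mean direction (radians)

# Rayleigh test of uniformity (no preferred direction), large-sample approximation.
Z = n * R_bar**2
p = np.exp(-Z) * (1 + (2 * Z - Z**2) / (4 * n))
print(f"mean direction = {np.degrees(mean_dir):.1f} deg, R_bar = {R_bar:.3f}, Rayleigh p ~ {max(p, 0.0):.3g}")
```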

  12. Statistical analyses to support forensic interpretation for a new ten-locus STR profiling system.

    PubMed

    Foreman, L A; Evett, I W

    2001-01-01

    A new ten-locus STR (short tandem repeat) profiling system was recently introduced into casework by the Forensic Science Service (FSS) and statistical analyses are described here based on data collected using this new system for the three major racial groups of the UK: Caucasian, Afro-Caribbean, and Asian (of Indo-Pakistani descent). Allele distributions are compared and the FSS position with regard to routine significance testing of DNA frequency databases is discussed. An investigation of match probability calculations is carried out and the consequent analyses are shown to provide support for proposed changes in how the FSS reports DNA results when very small match probabilities are involved.
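
    A toy version of a multi-locus match probability can be computed with the product rule under Hardy-Weinberg and linkage equilibrium assumptions. The allele frequencies below are invented, and none of the population-structure corrections (e.g., F_ST adjustments) that a casework calculation would include are applied.

```python
# Hypothetical allele frequencies for a 10-locus profile (invented values).
# Each entry: (p, q) for the two observed alleles; p == q marks a homozygote.
profile = [(0.12, 0.08), (0.20, 0.05), (0.11, 0.11), (0.09, 0.15), (0.07, 0.22),
           (0.18, 0.04), (0.10, 0.10), (0.06, 0.13), (0.14, 0.09), (0.05, 0.19)]

match_prob = 1.0
for p, q in profile:
    match_prob *= p * p if p == q else 2 * p * q   # HWE genotype frequency per locus
print(f"10-locus match probability ~ 1 in {1 / match_prob:,.0f}")
```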

  13. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2016-09-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While use of statistical methods for tolerancing is not something new, engineers often use known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method must be employed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerance. We use the percentile method (PM) to estimate the distribution parameters. The findings indicated that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerance. Moreover, in the case of assembled sets, more extensive tolerance for each component with the same target performance can be utilized.

  14. Radiation Induced Chromatin Conformation Changes Analysed by Fluorescent Localization Microscopy, Statistical Physics, and Graph Theory

    PubMed Central

    Müller, Patrick; Hillebrandt, Sabina; Krufczik, Matthias; Bach, Margund; Kaufmann, Rainer; Hausmann, Michael; Heermann, Dieter W.

    2015-01-01

    It has been well established that the architecture of chromatin in cell nuclei is not random but functionally correlated. Chromatin damage caused by ionizing radiation raises complex repair machineries. This is accompanied by local chromatin rearrangements and structural changes which may for instance improve the accessibility of damaged sites for repair protein complexes. Using stably transfected HeLa cells expressing either green fluorescent protein (GFP) labelled histone H2B or yellow fluorescent protein (YFP) labelled histone H2A, we investigated the positioning of individual histone proteins in cell nuclei by means of high resolution localization microscopy (Spectral Position Determination Microscopy = SPDM). The cells were exposed to ionizing radiation of different doses and aliquots were fixed after different repair times for SPDM imaging. In addition to the repair dependent histone protein pattern, the positioning of antibodies specific for heterochromatin and euchromatin was separately recorded by SPDM. The present paper aims to provide a quantitative description of structural changes of chromatin after irradiation and during repair. It introduces a novel approach to analyse SPDM images by means of statistical physics and graph theory. The method is based on the calculation of the radial distribution functions as well as edge length distributions for graphs defined by a triangulation of the marker positions. The obtained results show that through the cell nucleus the different chromatin re-arrangements as detected by the fluorescent nucleosomal pattern average themselves. In contrast heterochromatic regions alone indicate a relaxation after radiation exposure and re-condensation during repair whereas euchromatin seemed to be unaffected or behave contrarily. SPDM in combination with the analysis techniques applied allows the systematic elucidation of chromatin re-arrangements after irradiation and during repair, if selected sub-regions of nuclei are

  15. Radiation induced chromatin conformation changes analysed by fluorescent localization microscopy, statistical physics, and graph theory.

    PubMed

    Zhang, Yang; Máté, Gabriell; Müller, Patrick; Hillebrandt, Sabina; Krufczik, Matthias; Bach, Margund; Kaufmann, Rainer; Hausmann, Michael; Heermann, Dieter W

    2015-01-01

    It has been well established that the architecture of chromatin in cell nuclei is not random but functionally correlated. Chromatin damage caused by ionizing radiation raises complex repair machineries. This is accompanied by local chromatin rearrangements and structural changes which may for instance improve the accessibility of damaged sites for repair protein complexes. Using stably transfected HeLa cells expressing either green fluorescent protein (GFP) labelled histone H2B or yellow fluorescent protein (YFP) labelled histone H2A, we investigated the positioning of individual histone proteins in cell nuclei by means of high resolution localization microscopy (Spectral Position Determination Microscopy = SPDM). The cells were exposed to ionizing radiation of different doses and aliquots were fixed after different repair times for SPDM imaging. In addition to the repair dependent histone protein pattern, the positioning of antibodies specific for heterochromatin and euchromatin was separately recorded by SPDM. The present paper aims to provide a quantitative description of structural changes of chromatin after irradiation and during repair. It introduces a novel approach to analyse SPDM images by means of statistical physics and graph theory. The method is based on the calculation of the radial distribution functions as well as edge length distributions for graphs defined by a triangulation of the marker positions. The obtained results show that through the cell nucleus the different chromatin re-arrangements as detected by the fluorescent nucleosomal pattern average themselves. In contrast heterochromatic regions alone indicate a relaxation after radiation exposure and re-condensation during repair whereas euchromatin seemed to be unaffected or behave contrarily. SPDM in combination with the analysis techniques applied allows the systematic elucidation of chromatin re-arrangements after irradiation and during repair, if selected sub-regions of nuclei are
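
    The two descriptors named in the abstract, a radial distribution of marker positions and the edge-length distribution of a triangulation, can be sketched on simulated 2D localization points. The code below is a condensed illustration; the published analysis normalizes the pair-correlation function more carefully and works on SPDM data.

```python
import numpy as np
from scipy.spatial import Delaunay, distance

rng = np.random.default_rng(0)
points = rng.uniform(0, 1000, size=(500, 2))        # simulated 2D marker positions (nm)

# Radial distribution of pairwise distances (unnormalized pair-correlation histogram).
d = distance.pdist(points)
hist, edges = np.histogram(d, bins=50, range=(0, 500))
peak_bin = edges[np.argmax(hist)]

# Edge-length distribution from a Delaunay triangulation of the same positions.
tri = Delaunay(points)
edge_set = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edge_set.add((a, b))
edge_lengths = np.array([np.linalg.norm(points[a] - points[b]) for a, b in edge_set])

print(f"pair-distance histogram peaks near {peak_bin:.0f} nm; "
      f"median Delaunay edge length = {np.median(edge_lengths):.1f} nm")
```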

  16. Statistical Analyses of Second Indoor Bio-Release Field Evaluation Study at Idaho National Laboratory

    SciTech Connect

    Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.

    2009-12-17

    In September 2008 a large-scale testing operation (referred to as the INL-2 test) was performed within a two-story building (PBF-632) at the Idaho National Laboratory (INL). The report “Operational Observations on the INL-2 Experiment” defines the seven objectives for this test and discusses the results and conclusions. This is further discussed in the introduction of this report. The INL-2 test consisted of five tests (events) in which a floor (level) of the building was contaminated with the harmless biological warfare agent simulant Bg and samples were taken in most, if not all, of the rooms on the contaminated floor. After the sampling, the building was decontaminated, and the next test performed. Judgmental samples and probabilistic samples were determined and taken during each test. Vacuum, wipe, and swab samples were taken within each room. The purpose of this report is to study an additional four topics that were not within the scope of the original report. These topics are: 1) assess the quantitative assumptions about the data being normally or log-normally distributed; 2) evaluate differences and quantify the sample to sample variability within a room and across the rooms; 3) perform geostatistical types of analyses to study spatial correlations; and 4) quantify the differences observed between surface types and sampling methods for each scenario and study the consistency across the scenarios. The following four paragraphs summarize the results of each of the four additional analyses. All samples after decontamination came back negative. Because of this, it was not appropriate to determine if these clearance samples were normally distributed. As Table 1 shows, the characterization data consists of values between and inclusive of 0 and 100 CFU/cm2 (100 was the value assigned when the number is too numerous to count). The 100 values are generally much bigger than the rest of the data, causing the data to be right skewed. There are also a significant

  17. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    SciTech Connect

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
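
    The first three pattern tests listed above can be reproduced compactly for a single input/output scatterplot. The sketch below uses simulated data and scipy: Pearson correlation for linear relationships, Spearman rank correlation for monotonic ones, and a Kruskal-Wallis test across equal-count bins of the input for trends in central tendency.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=300)                       # sampled input variable
y = np.sin(3 * x) + rng.normal(scale=0.3, size=300)   # simulated model output

r, p_r = stats.pearsonr(x, y)                         # (1) linear relationship
rho, p_rho = stats.spearmanr(x, y)                    # (2) monotonic relationship

# (3) trend in central tendency: Kruskal-Wallis across equal-count bins of x.
cuts = np.quantile(x, [0.25, 0.5, 0.75])
idx = np.digitize(x, cuts)
groups = [y[idx == k] for k in range(4)]
h, p_h = stats.kruskal(*groups)

print(f"Pearson r = {r:.2f} (p = {p_r:.3g}); Spearman rho = {rho:.2f} (p = {p_rho:.3g}); "
      f"Kruskal-Wallis H = {h:.2f} (p = {p_h:.3g})")
```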

  18. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.

    2013-04-01

    Most of the measurement strategies that are suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring, in real time, airborne particle concentrations (according to different metrics). Since none of the instruments to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature. These range from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search for an appropriate and robust method is ongoing. In this context, this exploratory study investigates a statistical method to analyse time resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data from a workplace study that investigated the potential for exposure via inhalation from cleanout operations by sandpapering of a reactor producing nanocomposite thin films have been used. In this workplace study, the background issue has been addressed through the near-field and far-field approaches and several size integrated and time resolved devices have been used. The analysis of the results presented here focuses only on data obtained with two handheld condensation particle counters. While one was measuring at the source of the released particles, the other one was measuring in parallel far-field. The Bayesian probabilistic approach allows a probabilistic modelling of data series, and the observed task is modelled in the form of probability distributions. The probability distributions issuing from time resolved data obtained at the source can be compared with the probability distributions issuing from the time resolved data obtained far-field, leading in a
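
    The study's actual Bayesian model is not specified in the abstract; the sketch below is a minimal illustration of the general idea only: treat each counter's time-resolved particle counts as Poisson with a conjugate Gamma prior, and compare the resulting posterior rate distributions for the near-field and far-field instruments. The counts and the prior are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 1-second particle counts (invented rates), near the source and far-field.
near = rng.poisson(lam=120, size=600)
far = rng.poisson(lam=80, size=600)

a0, b0 = 1.0, 0.01                      # weakly informative Gamma(shape, rate) prior

def posterior(counts, a=a0, b=b0):
    # Gamma-Poisson conjugacy: posterior rate ~ Gamma(a + sum(counts), rate = b + n).
    return stats.gamma(a + counts.sum(), scale=1.0 / (b + counts.size))

post_near, post_far = posterior(near), posterior(far)
lo_n, hi_n = post_near.interval(0.95)
lo_f, hi_f = post_far.interval(0.95)
# Monte Carlo estimate of P(near-field rate > far-field rate) from the two posteriors.
p_excess = np.mean(post_near.rvs(100_000, random_state=1) > post_far.rvs(100_000, random_state=2))

print(f"near-field rate 95% credible interval: ({lo_n:.1f}, {hi_n:.1f}) particles/s")
print(f"far-field  rate 95% credible interval: ({lo_f:.1f}, {hi_f:.1f}) particles/s")
print(f"P(near-field rate > far-field rate) ~ {p_excess:.3f}")
```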

  19. Testing for Additivity at Select Mixture Groups of Interest Based on Statistical Equivalence Testing Methods

    SciTech Connect

    Stork, LeAnna M.; Gennings, Chris; Carchman, Richard; Carter, Jr., Walter H.; Pounds, Joel G.; Mumtaz, Moiz

    2006-12-01

    Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice mixture components in the low-dose region, particularly subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence testing literature where the null hypothesis of interaction is rejected for the alternative hypothesis of additivity when data support the claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework that assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction. In this scenario, failure to reject may be due to lack of statistical power making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
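
    The generic equivalence-testing logic can be illustrated with two one-sided tests (TOST) on the deviation between an observed mixture response and its additivity prediction. The sketch below uses made-up differences and an arbitrary margin; the authors' actual test follows Berenbaum's definition of additivity with margins set by expert biological judgment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Made-up per-animal differences: observed motor activity minus the additivity prediction.
diffs = rng.normal(loc=1.0, scale=8.0, size=20)
margin = 5.0                      # prespecified additivity margin (illustrative units)

n = diffs.size
m, se = diffs.mean(), diffs.std(ddof=1) / np.sqrt(n)
t_lower = (m + margin) / se       # tests H0: mean difference <= -margin
t_upper = (m - margin) / se       # tests H0: mean difference >= +margin
p_lower = stats.t.sf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)
p_tost = max(p_lower, p_upper)    # reject both one-sided nulls => equivalence (additivity)

print(f"mean deviation from additivity = {m:.2f}, TOST p = {p_tost:.3f}")
print("additivity supported within margin" if p_tost < 0.05 else "additivity not demonstrated")
```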

  20. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    PubMed

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community.

  1. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    PubMed

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. PMID:25314312

  2. Understanding of Statistical Terms Routinely Used in Meta-Analyses: An International Survey among Researchers

    PubMed Central

    Mavros, Michael N.; Alexiou, Vangelis G.; Vardakas, Konstantinos Z.; Falagas, Matthew E.

    2013-01-01

    Objective Biomedical literature is increasingly enriched with literature reviews and meta-analyses. We sought to assess the understanding of statistical terms routinely used in such studies, among researchers. Methods An online survey posing 4 clinically-oriented multiple-choice questions was conducted in an international sample of randomly selected corresponding authors of articles indexed by PubMed. Results A total of 315 unique complete forms were analyzed (participation rate 39.4%), mostly from Europe (48%), North America (31%), and Asia/Pacific (17%). Only 10.5% of the participants answered correctly all 4 “interpretation” questions while 9.2% answered all questions incorrectly. Regarding each question, 51.1%, 71.4%, and 40.6% of the participants correctly interpreted statistical significance of a given odds ratio, risk ratio, and weighted mean difference with 95% confidence intervals respectively, while 43.5% correctly replied that no statistical model can adjust for clinical heterogeneity. Clinicians had more correct answers than non-clinicians (mean score ± standard deviation: 2.27±1.06 versus 1.83±1.14, p<0.001); among clinicians, there was a trend towards a higher score in medical specialists (2.37±1.07 versus 2.04±1.04, p = 0.06) and a lower score in clinical laboratory specialists (1.7±0.95 versus 2.3±1.06, p = 0.08). No association was observed between the respondents' region or questionnaire completion time and participants' score. Conclusion A considerable proportion of researchers, randomly selected from a diverse international sample of biomedical scientists, misinterpreted statistical terms commonly reported in meta-analyses. Authors could be prompted to explicitly interpret their findings to prevent misunderstandings and readers are encouraged to keep up with basic biostatistics. PMID:23326299

  3. Design and Statistical Analyses of Oral Medicine Studies: Common Pitfalls

    PubMed Central

    Baccaglini, Lorena; Shuster, Jonathan J.; Cheng, Jing; Theriaque, Douglas W.; Schoenbach, Victor J.; Tomar, Scott L.; Poole, Charles

    2010-01-01

    A growing number of articles are emerging in the medical and statistics literature that describe epidemiological and statistical flaws of research studies. Many examples of these deficiencies are encountered in the oral, craniofacial and dental literature. However, only a handful of methodological articles have been published in the oral literature warning investigators of potential errors that may arise early in the study and that can irreparably bias the final results. In this paper we briefly review some of the most common pitfalls that our team of epidemiologists and statisticians has identified during the review of submitted or published manuscripts and research grant applications. We use practical examples from the oral medicine and dental literature to illustrate potential shortcomings in the design and analyses of research studies, and how these deficiencies may affect the results and their interpretation. A good study design is essential, because errors in the analyses can be corrected if the design was sound, but flaws in study design can lead to data that are not salvageable. We recommend consultation with an epidemiologist or a statistician during the planning phase of a research study to optimize study efficiency, minimize potential sources of bias, and document the analytic plan. PMID:19874532

  4. A weighted U-statistic for genetic association analyses of sequencing data.

    PubMed

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very low-density lipoprotein cholesterol.

  5. Comparisons of power of statistical methods for gene-environment interaction analyses.

    PubMed

    Ege, Markus J; Strachan, David P

    2013-10-01

    Any genome-wide analysis is hampered by reduced statistical power due to multiple comparisons. This is particularly true for interaction analyses, which have lower statistical power than analyses of associations. To assess gene-environment interactions in population settings we have recently proposed a statistical method based on a modified two-step approach, where first genetic loci are selected by their associations with disease and environment, respectively, and subsequently tested for interactions. We have simulated various data sets resembling real world scenarios and compared single-step and two-step approaches with respect to true positive rate (TPR) in 486 scenarios and (study-wide) false positive rate (FPR) in 252 scenarios. Our simulations confirmed that in all two-step methods the two steps are not correlated. In terms of TPR, two-step approaches combining information on gene-disease association and gene-environment association in the first step were superior to all other methods, while preserving a low FPR in over 250 million simulations under the null hypothesis. Our weighted modification yielded the highest power across various degrees of gene-environment association in the controls. An optimal threshold for step 1 depended on the interacting allele frequency and the disease prevalence. In all scenarios, the least powerful method was to proceed directly to an unbiased full interaction model, applying conventional genome-wide significance thresholds. This simulation study confirms the practical advantage of two-step approaches to interaction testing over more conventional one-step designs, at least in the context of dichotomous disease outcomes and other parameters that might apply in real-world settings.
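
    A rough Python sketch of a generic two-step screen of the kind described above, simplified relative to the authors' modified approach (here step 1 screens only on the marginal gene-disease association); the sample sizes, effect sizes, and thresholds are hypothetical, and statsmodels is assumed to be available.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n, m = 2000, 100                        # subjects and genetic markers (hypothetical sizes)
      G = rng.binomial(2, 0.3, size=(n, m))   # additive genotype codes 0/1/2
      E = rng.binomial(1, 0.5, size=n)        # binary environmental exposure
      logit = -1 + 0.4 * G[:, 0] * E          # marker 0 acts only through a gene-environment interaction
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      # Step 1: screen markers on their marginal association with disease (liberal threshold).
      step1_p = np.array([sm.Logit(y, sm.add_constant(G[:, j])).fit(disp=0).pvalues[1] for j in range(m)])
      passed = np.flatnonzero(step1_p < 0.05)

      # Step 2: test the interaction term only for markers that passed step 1, so the
      # multiple-testing correction is over len(passed) tests rather than all m markers.
      for j in passed:
          X = sm.add_constant(np.column_stack([G[:, j], E, G[:, j] * E]))
          p_int = sm.Logit(y, X).fit(disp=0).pvalues[3]
          if p_int < 0.05 / max(len(passed), 1):
              print(f"marker {j}: interaction p = {p_int:.2e}")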

  6. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    NASA Astrophysics Data System (ADS)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley
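
    A hedged sketch (synthetic data; the predictor names are hypothetical) of the type of random-forest model described above, which predicts valley slope asymmetry from other landscape asymmetry measures and ranks the predictors by importance; scikit-learn is assumed.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(42)
      n = 1000  # valley segments (hypothetical count)
      X = rng.normal(size=(n, 4))            # e.g. asymmetries in vegetation, drainage slope, insolation, soil index
      y = 0.6 * X[:, 1] + 0.3 * X[:, 0] + rng.normal(scale=0.3, size=n)  # slope asymmetry

      model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
      names = ["vegetation_asym", "drainage_slope_asym", "insolation_asym", "soil_index_asym"]
      for name, imp in sorted(zip(names, model.feature_importances_), key=lambda t: -t[1]):
          print(f"{name:>22s}  importance={imp:.2f}")
      print("out-of-bag R^2:", round(model.oob_score_, 2))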

  7. How does the Danish Groundwater Monitoring Programme support statistically consistent nitrate trend analyses in groundwater?

    NASA Astrophysics Data System (ADS)

    Hansen, Birgitte; Thorling, Lærke; Sørensen, Brian; Dalgaard, Tommy; Erlandsen, Mogens

    2013-04-01

    The overall aim of performing nitrate trend analyses in oxic groundwater is to document the effect of regulation of Danish agriculture on N pollution. The design of the Danish Groundwater Monitoring Programme is presented and discussed in relation to the performance of statistically consistent nitrate trend analyses. Three types of data are crucial. Firstly, long and continuous time-series from the national groundwater monitoring network enable a statistically systematic analysis of distribution, trends and trend reversals in the groundwater nitrate concentration. Secondly, knowledge about the N surplus in Danish agriculture since 1950 from Statistics Denmark is used as an indicator of the potential loss of N. Thirdly, groundwater recharge age determinations are performed in order to allow linking of the first two datasets. Recent results published in Hansen et al. (2011 & 2012) will be presented. Since the 1980s, regulations implemented by Danish farmers have succeeded in optimizing the N (nitrogen) management at farm level. As a result, the upward agricultural N surplus trend has been reversed, and the N surplus has been reduced by 30-55% from 1980 to 2007 depending on region. The reduction in the N surplus served to reduce the losses of N from agriculture, with documented positive effects on nature and the environment in Denmark. In groundwater, the upward trend in nitrate concentrations was reversed around 1980, and a larger number of downward nitrate trends were seen in the youngest groundwater compared with the oldest groundwater. However, on average, approximately 48% of the oxic monitored groundwater has nitrate concentrations above the groundwater and drinking water standards of 50 mg/l. Furthermore, trend analyses show that 33% of all the monitored groundwater has upward nitrate trends, while only 18% of the youngest groundwater has upward nitrate trends according to data sampled from 1988-2009. A regional analysis shows a correlation between a high level of N
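
    A small sketch of a per-site trend test of the kind used in such monitoring analyses, combining a Mann-Kendall-style monotonic trend test via Kendall's tau with an ordinary least-squares slope; the concentrations below are invented for illustration, and SciPy is assumed.

      import numpy as np
      from scipy import stats

      years = np.arange(1988, 2010)
      # Hypothetical nitrate concentrations (mg/l) for one oxic monitoring point
      nitrate = 55 - 0.6 * (years - 1988) + np.random.default_rng(1).normal(scale=3, size=years.size)

      tau, p_trend = stats.kendalltau(years, nitrate)            # monotonic (Mann-Kendall-type) trend test
      slope, intercept, r, p_lin, se = stats.linregress(years, nitrate)
      print(f"Kendall tau = {tau:.2f} (p = {p_trend:.3f}); OLS slope = {slope:.2f} mg/l per year")
      print("downward trend" if slope < 0 and p_trend < 0.05 else "no significant downward trend")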

  8. Statistical analyses to support guidelines for marine avian sampling. Final report

    USGS Publications Warehouse

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of the Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.

  9. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis

    PubMed Central

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-01-01

    We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064

  10. ADDITIONAL STRESS AND FRACTURE MECHANICS ANALYSES OF PRESSURIZED WATER REACTOR PRESSURE VESSEL NOZZLES

    SciTech Connect

    Walter, Matthew; Yin, Shengjun; Stevens, Gary; Sommerville, Daniel; Palm, Nathan; Heinecke, Carol

    2012-01-01

    In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axi-symmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack for pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 to the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluation of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP

  11. Additional Measurements and Analyses of H217O and H218O

    NASA Astrophysics Data System (ADS)

    Pearson, John; Yu, Shanshan; Walters, Adam; Daly, Adam M.

    2015-06-01

    Historically the analysis of the spectrum of water has been a balance between the quality of the data set and the applicability of the Hamiltonian to a highly non-rigid molecule. Recently, a number of different non-rigid analysis approaches have successfully been applied to 16O water, resulting in a self-consistent set of transitions and energy levels to high J which allowed the spectrum to be modeled to experimental precision. The data set for 17O and 18O water was previously reviewed and many of the problematic measurements identified, but Hamiltonian modeling of the remaining data resulted in significantly poorer quality fits than that for the 16O parent. As a result, we have made additional microwave measurements and modeled the existing 17O and 18O data sets with an Euler series model. This effort has illuminated a number of additional problematic measurements in the previous data sets and has resulted in analyses of 17O and 18O water that are of similar quality to the 16O analysis. We report the new lines and the analyses, and make recommendations on the quality of the experimental data sets. References: S. Yu, J.C. Pearson, B.J. Drouin et al., J. Mol. Spectrosc. 279, 16-25 (2012); J. Tennyson, P.F. Bernath, L.R. Brown et al., J. Quant. Spectrosc. Rad. Trans. 117, 29-58 (2013); J. Tennyson, P.F. Bernath, L.R. Brown et al., J. Quant. Spectrosc. Rad. Trans. 110, 573-596 (2009); H.M. Pickett, J.C. Pearson, C.E. Miller, J. Mol. Spectrosc. 233, 174-179 (2005).

  12. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    SciTech Connect

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered) approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, i.e. in detail, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  13. Reporting of allocation method and statistical analyses that deal with bilaterally affected wrists in clinical trials for carpal tunnel syndrome.

    PubMed

    Page, Matthew J; O'Connor, Denise A; Pitt, Veronica; Massy-Westropp, Nicola

    2013-11-01

    The authors aimed to describe how often the allocation method and the statistical analyses that deal with bilateral involvement are reported in clinical trials for carpal tunnel syndrome and to determine whether reporting has improved over time. Forty-two trials identified from recently published systematic reviews were assessed. Information about allocation method and statistical analyses was obtained from published reports and trialists. Only 15 trialists (36%) reported the method of random sequence generation used, and 6 trialists (14%) reported the method of allocation concealment used. Of 25 trials including participants with bilateral carpal tunnel syndrome, 17 (68%) reported the method used to allocate the wrists, whereas only 1 (4%) reported using a statistical analysis that appropriately dealt with bilateral involvement. There was no clear trend of improved reporting over time. Interventions are needed to improve reporting quality and statistical analyses of these trials so that these can provide more reliable evidence to inform clinical practice.

  14. Diurnal fluctuations in brain volume: Statistical analyses of MRI from large populations.

    PubMed

    Nakamura, Kunio; Brown, Robert A; Narayanan, Sridar; Collins, D Louis; Arnold, Douglas L

    2015-09-01

    We investigated fluctuations in brain volume throughout the day using statistical modeling of magnetic resonance imaging (MRI) from large populations. We applied fully automated image analysis software to measure the brain parenchymal fraction (BPF), defined as the ratio of brain parenchymal volume to intracranial volume, thus accounting for variations in head size. The MRI data came from serial scans of multiple sclerosis (MS) patients in clinical trials (n=755, 3269 scans) and from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI, n=834, 6114 scans). The percent change in BPF was modeled with a linear mixed effect (LME) model, and the model was applied separately to the MS and ADNI datasets. The LME model for the MS datasets included random subject effects (intercept and slope over time) and fixed effects for the time-of-day, time from the baseline scan, and trial, which accounted for trial-related effects (for example, different inclusion criteria and imaging protocol). The model for ADNI additionally included the demographics (baseline age, sex, subject type [normal, mild cognitive impairment, or Alzheimer's disease], and interaction between subject type and time from baseline). There was a statistically significant effect of time-of-day on the BPF change in MS clinical trial datasets (-0.180 per day, that is, 0.180% of intracranial volume, p=0.019) as well as the ADNI dataset (-0.438 per day, that is, 0.438% of intracranial volume, p<0.0001), showing that the brain volume is greater in the morning. Linearly correcting the BPF values with the time-of-day reduced the required sample size to detect a 25% treatment effect (80% power and 0.05 significance level) on change in brain volume from 2 time-points over a period of 1 year by 2.6%. Our results have significant implications for future brain volumetric studies, suggesting that there is a potential acquisition time bias that should be randomized or statistically controlled to
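
    A minimal sketch (fully synthetic data; variable names hypothetical; statsmodels assumed) of a linear mixed-effects model of the kind described, with random subject intercepts and slopes and a fixed effect of acquisition time-of-day.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      rows = []
      for subj in range(200):
          subj_intercept = rng.normal(scale=0.3)
          subj_slope = rng.normal(scale=0.05)
          for years in (0.0, 0.5, 1.0, 1.5):
              tod = rng.uniform(8, 18) / 24.0                      # scan time of day, expressed in days
              dbpf = subj_intercept + subj_slope * years - 0.18 * tod + rng.normal(scale=0.1)
              rows.append((subj, years, tod, dbpf))
      df = pd.DataFrame(rows, columns=["subject", "years", "time_of_day", "bpf_change"])

      # Random intercept and slope per subject; fixed effects for follow-up time and time-of-day.
      model = smf.mixedlm("bpf_change ~ years + time_of_day", df,
                          groups=df["subject"], re_formula="~years").fit()
      print(model.summary())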

  15. Review of Statistical Analyses Resulting from Performance of HLDWD-DWPF-005

    SciTech Connect

    Beck, R.S.

    1997-09-29

    The Engineering Department at the Defense Waste Processing Facility (DWPF) has reviewed two reports from the Statistical Consulting Section (SCS) involving the statistical analysis of test results for analysis of small sample inserts (references 1 & 2). The test results cover two proposed analytical methods, a room temperature hydrofluoric acid preparation (Cold Chem) and a sodium peroxide/sodium hydroxide fusion modified for insert samples (Modified Fusion). The reports support implementation of the proposed small sample containers and analytical methods at DWPF. Hydragard sampler valve performance was typical of previous results (reference 3). Using an element from each major feed stream, lithium from the frit and iron from the sludge, the sampler was determined to deliver a uniform mixture in either sample container. The lithium to iron ratios were equivalent for the standard 15 ml vial and the 3 ml insert. The proposed methods provide equivalent analyses as compared to the current methods. The biases associated with the proposed methods on a vitrified basis are less than 5% for major elements. The sum of oxides for the proposed method compares favorably with the sum of oxides for the conventional methods. However, the average sum of oxides for the Cold Chem method was 94.3%, which is below the minimum required recovery of 95%. Both proposed methods, Cold Chem and Modified Fusion, will be required at first to provide an accurate analysis which will routinely meet the 95% and 105% average sum of oxides limits for the Product Composition Control System (PCCS). Issues to be resolved during phased implementation are as follows: (1) determine the calcine/vitrification factor for radioactive feed; (2) evaluate covariance matrix changes against process operating ranges to determine optimum sample size; (3) evaluate sources of low sum of oxides; and (4) improve remote operability of production versions of equipment and instruments for installation in 221-S. The specifics of

  16. How to produce personality neuroscience research with high statistical power and low additional cost.

    PubMed

    Mar, Raymond A; Spreng, R Nathan; Deyoung, Colin G

    2013-09-01

    Personality neuroscience involves examining relations between cognitive or behavioral variability and neural variables like brain structure and function. Such studies have uncovered a number of fascinating associations but require large samples, which are expensive to collect. Here, we propose a system that capitalizes on neuroimaging data commonly collected for separate purposes and combines it with new behavioral data to test novel hypotheses. Specifically, we suggest that groups of researchers compile a database of structural (i.e., anatomical) and resting-state functional scans produced for other task-based investigations and pair these data with contact information for the participants who contributed the data. This contact information can then be used to collect additional cognitive, behavioral, or individual-difference data that are then reassociated with the neuroimaging data for analysis. This would allow for novel hypotheses regarding brain-behavior relations to be tested on the basis of large sample sizes (with adequate statistical power) for low additional cost. This idea can be implemented at small scales at single institutions, among a group of collaborating researchers, or perhaps even within a single lab. It can also be implemented at a large scale across institutions, although doing so would entail a number of additional complications.

  17. Structured multiplicity and confirmatory statistical analyses in pharmacodynamic studies using the quantitative electroencephalogram.

    PubMed

    Ferber, Georg; Staner, Luc; Boeijinga, Peter

    2011-09-30

    Pharmacodynamic (PD) clinical studies are characterised by a high degree of multiplicity. This multiplicity is the result of the design of these studies that typically investigate effects of a number of biomarkers at various doses and multiple time points. Measurements are taken at many or all points of a "hyper-grid" that can be understood as the cross-product of a number of dimensions each of which has typically 3-30 discrete values. This exploratory design helps understanding the phenomena under investigation, but has made a confirmatory statistical analysis of these studies difficult, so that such an analysis is often missing in this type of studies. In this contribution we show that the cross-product structure of PD studies allows to combine several well-known techniques to address multiplicity in an effective way, so that a confirmatory analysis of these studies becomes feasible without unrealistic loss of power. We demonstrate the application of this technique in two studies that use the quantitative EEG (qEEG) as biomarker for drug activity at the GABA-A receptor. QEEG studies suffer particularly from the curse of multiplicity, since, in addition to the common dimensions like dose and time, the qEEG is measured at many locations over the scalp and in a number of frequency bands which inflate the multiplicity by a factor of about 250.

  18. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
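
    A sketch contrasting a parametric predictor (ridge regression, conceptually related to BLUP) with a nonparametric one (an RBF kernel) on simulated purely additive versus two-way epistatic architectures; population size, marker counts, and effect sizes are all invented, and scikit-learn is assumed.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n, p = 400, 200
      G = rng.binomial(2, 0.5, size=(n, p)).astype(float)     # F2-like genotype codes 0/1/2

      def phenotype(G, epistatic):
          if epistatic:  # trait driven by pairwise interactions only
              signal = G[:, 0] * G[:, 1] + G[:, 2] * G[:, 3]
          else:          # purely additive trait
              signal = G[:, :4] @ np.ones(4)
          signal = (signal - signal.mean()) / signal.std()
          noise = rng.normal(scale=np.sqrt(0.3 / 0.7), size=G.shape[0])  # roughly 70% of variance explained
          return signal + noise

      for label, epi in [("additive", False), ("epistatic", True)]:
          y = phenotype(G, epi)
          for name, est in [("ridge (parametric)", Ridge(alpha=10.0)),
                            ("RBF kernel (nonparametric)", KernelRidge(alpha=1.0, kernel="rbf", gamma=0.01))]:
              acc = cross_val_score(est, G, y, cv=5, scoring="r2").mean()
              print(f"{label:>9s} architecture, {name:<27s} mean CV R^2 = {acc:.2f}")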

  19. Genetic and statistical analyses of strong selection on polygenic traits: What, me normal?

    SciTech Connect

    Turelli, M.; Barton, N.H.

    1994-11-01

    We develop a general population genetic framework for analyzing selection on many loci, and apply it to strong truncation and disruptive selection on an additive polygenic trait. We first present statistical methods for analyzing the infinitesimal model, in which offspring breeding values are normally distributed around the mean of the parents, with fixed variance. The usual assumption of a Gaussian distribution of breeding values in the population gives remarkably accurate predictions for the mean and the variance, even when disruptive selection generates substantial deviations from normality. We then set out a general genetic analysis of selection and recombination. The population is represented by multilocus cumulants describing the distribution of haploid genotypes, and selection is described by the relation between mean fitness and these cumulants. We provide exact recursions in terms of generating functions for the effects of selection on non-central moments. The new cumulants that describe the next generation are computed from the non-central moments. Numerical multilocus results show that the standard Gaussian approximation gives accurate predictions for the dynamics of the mean and genetic variance in this limit. Even with intense truncation selection, linkage disequilibria of order three and higher never cause much deviation from normality. Thus, the empirical deviations frequently found between predicted and observed responses to artificial selection are not caused by linkage-disequilibrium-induced departures from normality. Disruptive selection can generate substantial four-way disequilibria, and hence kurtosis; but even then, the Gaussian assumption predicts the variance accurately. In contrast to the apparent simplicity of the infinitesimal limit, data suggest that changes in genetic variance after 10 or more generations of selection are likely to be dominated by allele frequency dynamics that depend on genetic details. 51 refs., 11 figs., 3 tabs.

  20. Genetic and Statistical Analyses of Strong Selection on Polygenic Traits: What, Me Normal?

    PubMed Central

    Turelli, M.; Barton, N. H.

    1994-01-01

    We develop a general population genetic framework for analyzing selection on many loci, and apply it to strong truncation and disruptive selection on an additive polygenic trait. We first present statistical methods for analyzing the infinitesimal model, in which offspring breeding values are normally distributed around the mean of the parents, with fixed variance. These show that the usual assumption of a Gaussian distribution of breeding values in the population gives remarkably accurate predictions for the mean and the variance, even when disruptive selection generates substantial deviations from normality. We then set out a general genetic analysis of selection and recombination. The population is represented by multilocus cumulants describing the distribution of haploid genotypes, and selection is described by the relation between mean fitness and these cumulants. We provide exact recursions in terms of generating functions for the effects of selection on non-central moments. The effects of recombination are simply calculated as a weighted sum over all the permutations produced by meiosis. Finally, the new cumulants that describe the next generation are computed from the non-central moments. Although this scheme is applied here in detail only to selection on an additive trait, it is quite general. For arbitrary epistasis and linkage, we describe a consistent infinitesimal limit in which the short-term selection response is dominated by infinitesimal allele frequency changes and linkage disequilibria. Numerical multilocus results show that the standard Gaussian approximation gives accurate predictions for the dynamics of the mean and genetic variance in this limit. Even with intense truncation selection, linkage disequilibria of order three and higher never cause much deviation from normality. Thus, the empirical deviations frequently found between predicted and observed responses to artificial selection are not caused by linkage

  1. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    SciTech Connect

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
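
    A compact sketch of the graduated sequence of tests described above for one sampled input x and one model output y (both simulated here): linear correlation, rank correlation, a Kruskal-Wallis test for trends in central tendency across bins of x, and a chi-square test of the binned scatterplot for deviation from randomness; SciPy is assumed.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      x = rng.uniform(0, 1, 500)                              # sampled input variable
      y = np.sin(3 * x) + rng.normal(scale=0.3, size=x.size)  # model output (nonlinear in x)

      r, p_r = stats.pearsonr(x, y)                 # (i) linear relationship
      rho, p_rho = stats.spearmanr(x, y)            # (ii) monotonic relationship
      x_bins = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
      groups = [y[x_bins == k] for k in range(4)]
      h, p_kw = stats.kruskal(*groups)              # (iii) trend in central tendency across x bins
      y_bins = np.digitize(y, np.quantile(y, [0.5]))
      table = np.array([[np.sum((x_bins == i) & (y_bins == j)) for j in range(2)] for i in range(4)])
      chi2, p_chi, dof, _ = stats.chi2_contingency(table)   # (v) deviation from randomness in the binned grid

      print(f"pearson p={p_r:.3g}  spearman p={p_rho:.3g}  kruskal p={p_kw:.3g}  chi2 p={p_chi:.3g}")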

  2. Using Additional Analyses to Clarify the Functions of Problem Behavior: An Analysis of Two Cases

    ERIC Educational Resources Information Center

    Payne, Steven W.; Dozier, Claudia L.; Neidert, Pamela L.; Jowett, Erica S.; Newquist, Matthew H.

    2014-01-01

    Functional analyses (FA) have proven useful for identifying contingencies that influence problem behavior. Research has shown that some problem behavior may only occur in specific contexts or be influenced by multiple or idiosyncratic variables. When these contexts or sources of influence are not assessed in an FA, further assessment may be…

  3. The use and misuse of statistical analyses. [in geophysics and space physics

    NASA Technical Reports Server (NTRS)

    Reiff, P. H.

    1983-01-01

    The statistical techniques most often used in space physics include Fourier analysis, linear correlation, auto- and cross-correlation, power spectral density, and superposed epoch analysis. Tests are presented which can evaluate the significance of the results obtained through each of these. Data presented without some form of error analysis are frequently useless, since they offer no way of assessing whether a bump on a spectrum or on a superposed epoch analysis is real or merely a statistical fluctuation. Among many of the published linear correlations, for instance, the uncertainty in the intercept and slope is not given, so that the significance of the fitted parameters cannot be assessed.
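
    A brief sketch of the kind of uncertainty reporting the paper calls for: the same linear correlation is far more informative when accompanied by a p-value and a confidence interval on the fitted slope (here from a simple bootstrap); the data are invented and SciPy is assumed.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      x = rng.normal(size=40)                             # e.g. a solar wind driver
      y = 0.4 * x + rng.normal(scale=1.0, size=x.size)    # e.g. a geomagnetic response

      r, p = stats.pearsonr(x, y)
      slopes = [stats.linregress(x[idx], y[idx]).slope
                for idx in (rng.integers(0, x.size, x.size) for _ in range(2000))]
      lo, hi = np.percentile(slopes, [2.5, 97.5])
      print(f"r = {r:.2f} (p = {p:.3f}); bootstrap slope 95% CI = [{lo:.2f}, {hi:.2f}]")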

  4. The Use of Verb Information in Parsing: Different Statistical Analyses Lead to Contradictory Conclusions

    ERIC Educational Resources Information Center

    Kennison, Shelia M.

    2009-01-01

    The research investigated how comprehenders use verb information during syntactic parsing. Two reading experiments investigated the relationship between verb-specific variables and reading time. These experiments were close replications of prior work; however, two statistical techniques were used, rather than one. These were item-by-item…

  5. The Effects of Two Types of Sampling Error on Common Statistical Analyses.

    ERIC Educational Resources Information Center

    Arnold, Margery E.

    Sampling error refers to variability that is unique to the sample. If the sample is the entire population, then there is no sampling error. A related point is that sampling error is a function of sample size, as a hypothetical example illustrates. As the sample statistics more and more closely approximate the population parameters, the sampling…
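
    A tiny simulation (population values arbitrary) of the point being made: the sampling error of the mean shrinks roughly as 1/sqrt(n), so sample statistics approximate the population parameters more and more closely as the sample grows.

      import numpy as np

      rng = np.random.default_rng(0)
      population = rng.normal(loc=100, scale=15, size=1_000_000)   # hypothetical population, mu=100, sigma=15

      for n in (10, 100, 1000, 10000):
          sample_means = [rng.choice(population, size=n).mean() for _ in range(500)]
          print(f"n={n:>5d}  SD of sample means = {np.std(sample_means):.2f}"
                f"  vs sigma/sqrt(n) = {15 / np.sqrt(n):.2f}")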

  6. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  7. Thinking About Data, Research Methods, and Statistical Analyses: Commentary on Sijtsma's (2014) "Playing with Data".

    PubMed

    Waldman, Irwin D; Lilienfeld, Scott O

    2016-03-01

    We comment on Sijtsma's (2014) thought-provoking essay on how to minimize questionable research practices (QRPs) in psychology. We agree with Sijtsma that proactive measures to decrease the risk of QRPs will ultimately be more productive than efforts to target individual researchers and their work. In particular, we concur that encouraging researchers to make their data and research materials public is the best institutional antidote against QRPs, although we are concerned that Sijtsma's proposal to delegate more responsibility to statistical and methodological consultants could inadvertently reinforce the dichotomy between the substantive and statistical aspects of research. We also discuss sources of false-positive findings and replication failures in psychological research, and outline potential remedies for these problems. We conclude that replicability is the best metric of the minimization of QRPs and their adverse effects on psychological research.

  8. Time series expression analyses using RNA-seq: a statistical approach.

    PubMed

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis at ever-decreasing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulation of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model time-course RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021

  9. Statistic analyses of the color experience according to the age of the observer.

    PubMed

    Hunjet, Anica; Parac-Osterman, Durdica; Vucaj, Edita

    2013-04-01

    The psychological experience of color is a real state of communication between the environment and color, and it depends on the light source, the viewing angle, and in particular on the observer and his or her health condition. Hering's theory, or the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to three separate chromatic domains (red, green and purple-blue), but instead produce a signal based on the principle of opposed pairs of colors. Support for this theory comes from the fact that certain color vision disorders, which include blindness to certain colors, cause blindness to pairs of opponent colors. This paper examines the experience of blue and yellow tones according to the age of the observer. To test for statistically significant differences in color experience according to the color of the background, the following statistical tests were used: the Mann-Whitney U test, Kruskal-Wallis ANOVA, and the median test. The differences were shown to be statistically significant for older observers (older than 35 years).

  10. Calibration of back-analysed model parameters for landslides using classification statistics

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an updating of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Considering the fact that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. This research activity has been
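
    A small sketch (confusion-matrix counts invented) of the ROC-space summary described: each back-analysed parameter set yields true and false positive rates over the classified map cells, and the resulting "cloud" can be characterised by its distance to the perfect-classification corner (0, 1) and by its margin above the no-discrimination line.

      import numpy as np

      rng = np.random.default_rng(5)
      # Hypothetical confusion-matrix counts (TP, FP, FN, TN) for 50 runs of one runout model,
      # each run using a different sampled set of input parameters.
      tp = rng.integers(60, 90, 50); fn = 100 - tp
      fp = rng.integers(10, 40, 50); tn = 200 - fp

      tpr = tp / (tp + fn)
      fpr = fp / (fp + tn)
      dist_to_perfect = np.hypot(fpr - 0.0, tpr - 1.0)   # closer to (0, 1) is better
      above_diagonal = tpr - fpr                          # > 0 means better than no discrimination

      print(f"cloud centre: FPR={fpr.mean():.2f}, TPR={tpr.mean():.2f}")
      print(f"mean distance to perfect classification: {dist_to_perfect.mean():.2f}")
      print(f"mean margin above the no-discrimination line: {above_diagonal.mean():.2f}")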

  11. Metagenomic analyses of the late Pleistocene permafrost - additional tools for reconstruction of environmental conditions

    NASA Astrophysics Data System (ADS)

    Rivkina, Elizaveta; Petrovskaya, Lada; Vishnivetskaya, Tatiana; Krivushin, Kirill; Shmakova, Lyubov; Tutukina, Maria; Meyers, Arthur; Kondrashov, Fyodor

    2016-04-01

    A comparative analysis of the metagenomes from two 30 000-year-old permafrost samples, one of lake-alluvial origin and the other from late Pleistocene Ice Complex sediments, revealed significant differences within microbial communities. The late Pleistocene Ice Complex sediments (which have been characterized by the absence of methane with lower values of redox potential and Fe2+ content) showed a low abundance of methanogenic archaea and enzymes from both the carbon and nitrogen cycles, but a higher abundance of enzymes associated with the sulfur cycle. The metagenomic and geochemical analyses described in the paper provide evidence that the formation of the sampled late Pleistocene Ice Complex sediments likely took place under much more aerobic conditions than lake-alluvial sediments.

  12. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.

  13. Stationary wavelet transform and higher order statistical analyses of intrafascicular nerve recordings.

    PubMed

    Qiao, Shaoyu; Torkamani-Azar, Mastaneh; Salama, Paul; Yoshida, Ken

    2012-10-01

    Nerve signals were recorded from the sciatic nerve of the rabbits in the acute experiments with multi-channel thin-film longitudinal intrafascicular electrodes. 5.5 s sequences of quiescent and high-level nerve activity were spectrally decomposed by applying a ten-level stationary wavelet transform with the Daubechies 10 (Db10) mother wavelet. Then, the statistical distributions of the raw and subband-decomposed sequences were estimated and used to fit a fourth-order Pearson distribution as well as check for normality. The results indicated that the raw and decomposed background and high-level nerve activity distributions were nearly zero-mean and non-skew. All distributions with the frequency content above 187.5 Hz were leptokurtic except for the first-level decomposition representing frequencies in the subband between 12 and 24 kHz, which was Gaussian. This suggests that nerve activity acts to change the statistical distribution of the recording. The results further demonstrated that quiescent recording contained a mixture of an underlying pink noise and low-level nerve activity that could not be silenced. The signal-to-noise ratios based upon the standard deviation (SD) and kurtosis were estimated, and the latter was found as an effective measure for monitoring the nerve activity residing in different frequency subbands. The nerve activity modulated kurtosis along with SD, suggesting that the joint use of SD and kurtosis could improve the stability and detection accuracy of spike-detection algorithms. Finally, synthesizing the reconstructed subband signals following denoising based upon the higher order statistics of the subband-decomposed coefficient sequences allowed us to effectively purify the signal without distorting spike shape.
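
    A hedged sketch (synthetic signal; PyWavelets and SciPy assumed) of the decomposition-plus-kurtosis idea: a stationary wavelet transform with the Db10 mother wavelet splits the recording into subbands, and the excess kurtosis of each subband indicates where spike-like (leptokurtic) activity resides.

      import numpy as np
      import pywt
      from scipy.stats import kurtosis

      rng = np.random.default_rng(0)
      fs = 24000
      t = np.arange(2 ** 14) / fs                       # signal length must be a multiple of 2**level for SWT
      signal = rng.normal(scale=1.0, size=t.size)       # background noise
      spike_times = rng.integers(0, t.size - 20, 200)
      for s in spike_times:                             # add brief spike-like transients
          signal[s:s + 12] += 6 * np.hanning(12)

      level = 5
      coeffs = pywt.swt(signal, "db10", level=level)    # list of (approx, detail) pairs, coarsest level first
      for i, (_, detail) in enumerate(coeffs):
          print(f"detail level {level - i}: excess kurtosis = {kurtosis(detail):.2f}")
      print(f"raw signal: excess kurtosis = {kurtosis(signal):.2f}")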

  14. Bayesian statistical approaches to compositional analyses of transgenic crops 2. Application and validation of informative prior distributions.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Berman, Kristina H; Harrigan, George G

    2013-03-01

    Bayesian approaches to evaluation of crop composition data allow simpler interpretations than traditional statistical significance tests. An important advantage of Bayesian approaches is that they allow formal incorporation of previously generated data through prior distributions in the analysis steps. This manuscript describes key steps to ensure meaningful and transparent selection and application of informative prior distributions. These include (i) review of previous data in the scientific literature to form the prior distributions, (ii) proper statistical model specification and documentation, (iii) graphical analyses to evaluate the fit of the statistical model to new study data, and (iv) sensitivity analyses to evaluate the robustness of results to the choice of prior distribution. The validity of the prior distribution for any crop component is critical to acceptance of Bayesian approaches to compositional analyses and would be essential for studies conducted in a regulatory setting. Selection and validation of prior distributions for three soybean isoflavones (daidzein, genistein, and glycitein) and two oligosaccharides (raffinose and stachyose) are illustrated in a comparative assessment of data obtained on GM and non-GM soybean seed harvested from replicated field sites at multiple locations in the US during the 2009 growing season. PMID:23261475
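
    A miniature sketch of steps (i)-(iv) using a conjugate normal model: an informative prior for a component mean (daidzein is used as the example), formed from previously published values, is combined with new study data, and a flatter prior is substituted as a sensitivity check; every number here is invented.

      import numpy as np

      # Informative prior for mean daidzein content, formed from literature values (hypothetical numbers).
      prior_mean, prior_sd = 1.50, 0.30          # mg/g
      # New study data (hypothetical measurements from replicated field sites).
      y = np.array([1.62, 1.71, 1.58, 1.66, 1.74, 1.60])
      sigma = 0.15                               # assumed known measurement SD, for simplicity

      # Conjugate normal update: posterior precision is the sum of prior and data precisions.
      post_prec = 1 / prior_sd**2 + y.size / sigma**2
      post_mean = (prior_mean / prior_sd**2 + y.sum() / sigma**2) / post_prec
      post_sd = np.sqrt(1 / post_prec)
      print(f"posterior mean = {post_mean:.3f} mg/g, posterior SD = {post_sd:.3f}")

      # Sensitivity analysis (step iv): repeat with a much flatter prior and compare.
      flat_prec = 1 / 1.0**2 + y.size / sigma**2
      flat_mean = (prior_mean / 1.0**2 + y.sum() / sigma**2) / flat_prec
      print(f"with a vague prior: mean = {flat_mean:.3f}, SD = {np.sqrt(1 / flat_prec):.3f}")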

  15. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  16. JEAB Research Over Time: Species Used, Experimental Designs, Statistical Analyses, and Sex of Subjects.

    PubMed

    Zimmermann, Zachary J; Watkins, Erin E; Poling, Alan

    2015-10-01

    We examined the species used as subjects in every article published in the Journal of the Experimental Analysis of Behavior (JEAB) from 1958 through 2013. We also determined the sex of subjects in every article with human subjects (N = 524) and in an equal number of randomly selected articles with nonhuman subjects, as well as the general type of experimental designs used. Finally, the percentage of articles reporting an inferential statistic was determined at 5-year intervals. In all, 35,317 subjects were studied in 3,084 articles; pigeons ranked first and humans second in number used. Within-subject experimental designs were more popular than between-subjects designs regardless of whether human or nonhuman subjects were studied but were used in a higher percentage of articles with nonhumans (75.4 %) than in articles with humans (68.2 %). The percentage of articles reporting an inferential statistic has increased over time, and more than half of the articles published in 2005 and 2010 reported one. Researchers who publish in JEAB frequently depart from Skinner's preferred research strategy, but it is not clear whether such departures are harmful. Finally, the sex of subjects was not reported in a sizable percentage of articles with both human and nonhuman subjects. This is an unfortunate oversight. PMID:27606171

  17. Basic statistical analyses of candidate nickel-hydrogen cells for the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Maloney, Thomas M.; Frate, David T.

    1993-01-01

    Nickel-Hydrogen (Ni/H2) secondary batteries will be implemented as a power source for the Space Station Freedom as well as for other NASA missions. Consequently, characterization tests of Ni/H2 cells from Eagle-Picher, Whittaker-Yardney, and Hughes were completed at the NASA Lewis Research Center. Watt-hour efficiencies of each Ni/H2 cell were measured for regulated charge and discharge cycles as a function of temperature, charge rate, discharge rate, and state of charge. Temperatures ranged from -5 C to 30 C, charge rates ranged from C/10 to 1C, discharge rates ranged from C/10 to 2C, and states of charge ranged from 20 percent to 100 percent. Results from regression analyses and analyses of mean watt-hour efficiencies demonstrated that overall performance was best at temperatures between 10 C and 20 C while the discharge rate correlated most strongly with watt-hour efficiency. In general, the cell with back-to-back electrode arrangement, single stack, 26 percent KOH, and serrated zircar separator and the cell with a recirculating electrode arrangement, unit stack, 31 percent KOH, zircar separators performed best.

  18. Statistical methodology used in analyses of data from DOE experimental animal studies

    SciTech Connect

    Gilbert, E.S.; Griffith, W.C.; Carnes, B.A.

    1995-07-01

    This document describes many of the statistical approaches that are being used to analyze data from life-span animal studies conducted under the Department of Energy experimental radiobiology program. The methods, which are intended to be as informative as possible for assessing human health risks, account for time-related factors and competing risks, and are reasonably comparable to methods used for analyzing data from human epidemiologic studies of persons exposed to radiation. The methods described in this report model the hazard, or age-specific risk, as a function of dose and other factors such as dose rate, age at risk, and time since exposure. Both models in which the radiation risk is expressed relative to the baseline risk and models in which this risk is expressed in absolute terms are formulated. Both parametric and non-parametric models for baseline risks are considered, and several dose-response functions are suggested. Tumors in animals are not always the cause of death but instead may be found incidentally to death from other causes. This report gives detailed attention to the context of observation of tumors, and emphasizes an approach that makes use of information provided by the pathologist on whether tumors are fatal or incidental. Special cases are those in which all tumors are observed in a fatal context or in which all tumors are observed in an incidental context. Maximum likelihood theory provides the basis for fitting the suggested models and for making statistical inferences regarding parameters of these models. Approaches in which observations are grouped by intervals of time and possibly other factors are emphasized. This approach is based on iteratively reweighted least squares and uses Poisson weights for tumors considered to be fatal and binomial weights for tumors considered to be incidental.
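
    A rough sketch (grouped counts invented; statsmodels assumed) of the grouped-data approach described for fatal-context tumors: tumor deaths per dose-by-age cell are modeled with a Poisson generalized linear model, with log animal-time at risk as an offset, so the dose coefficient acts on the age-specific death rate.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical grouped data: dose (Gy), age-interval midpoint (days), animal-days at risk, tumor deaths.
      df = pd.DataFrame({
          "dose":    [0, 0, 0, 1, 1, 1, 3, 3, 3],
          "age":     [400, 600, 800] * 3,
          "at_risk": [9000, 7000, 4000, 9000, 6500, 3500, 8500, 5500, 2500],
          "deaths":  [2, 5, 9, 3, 8, 12, 6, 14, 18],
      })

      X = sm.add_constant(df[["dose", "age"]])
      model = sm.GLM(df["deaths"], X, family=sm.families.Poisson(),
                     offset=np.log(df["at_risk"])).fit()
      print(model.summary())
      print("fitted rate ratio per Gy:", round(float(np.exp(model.params["dose"])), 2))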

  19. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    SciTech Connect

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
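
    A sketch of the recommended workflow, PCA for dimension reduction followed by a classifier such as LDA, applied to synthetic "spectra" with class-specific marker peaks; the peak positions, class count, and channel count are invented, and scikit-learn is assumed.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      n_per_class, n_channels = 40, 500
      spectra, labels = [], []
      for cls, peak in enumerate([120, 260, 380]):          # three sample classes with different marker peaks
          base = rng.normal(scale=0.05, size=(n_per_class, n_channels))
          base[:, peak:peak + 5] += 1.0 + rng.normal(scale=0.2, size=(n_per_class, 5))
          spectra.append(base)
          labels += [cls] * n_per_class
      X, y = np.vstack(spectra), np.array(labels)

      pipe = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
      print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())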

  1. Additional Development and Systems Analyses of Pneumatic Technology for High Speed Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Willie, F. Scott; Lee, Warren J.

    1999-01-01

    In the Task I portion of this NASA research grant, configuration development and experimental investigations have been conducted on a series of pneumatic high-lift and control surface devices applied to a generic High Speed Civil Transport (HSCT) model configuration to determine their potential for improved aerodynamic performance, plus stability and control of higher performance aircraft. These investigations were intended to optimize pneumatic lift and drag performance; provide adequate control and longitudinal stability; reduce separation flowfields at high angle of attack; increase takeoff/climbout lift-to-drag ratios; and reduce system complexity and weight. Experimental aerodynamic evaluations were performed on a semi-span HSCT generic model with improved fuselage fineness ratio and with interchangeable plain flaps, blown flaps, pneumatic Circulation Control Wing (CCW) high-lift configurations, plain and blown canards, a novel Circulation Control (CC) cylinder blown canard, and a clean cruise wing for reference. Conventional tail power was also investigated for longitudinal trim capability. Also evaluated was unsteady pulsed blowing of the wing high-lift system to determine if reduced pulsed mass flow rates and blowing requirements could be made to yield the same lift as that resulting from steady-state blowing. Depending on the pulsing frequency applied, reduced mass flow rates were indeed found to provide lift augmentation at lower blowing values than for the steady conditions. Significant improvements in the aerodynamic characteristics leading to improved performance and stability/control were identified, and the various components were compared to evaluate the pneumatic potential of each. Aerodynamic results were provided to the Georgia Tech Aerospace System Design Lab. to conduct the companion system analyses and feasibility study (Task 2) of these concepts applied to an operational advanced HSCT aircraft. Results and conclusions from these

  2. Statistical discrimination of black gel pen inks analysed by laser desorption/ionization mass spectrometry.

    PubMed

    Weyermann, Céline; Bucher, Lukas; Majcherczyk, Paul; Mazzella, Williams; Roux, Claude; Esseiva, Pierre

    2012-04-10

    Pearson correlation coefficients were applied for the objective comparison of 30 black gel pen inks analysed by laser desorption ionization mass spectrometry (LDI-MS). The mass spectra were obtained for ink lines directly on paper using positive and negative ion modes at several laser intensities. This methodology has the advantage of taking into account the reproducibility of the results as well as the variability between spectra of different pens. A differentiation threshold could thus be selected in order to avoid the risk of false differentiation. Combining results from positive and negative mode yielded a discriminating power up to 85%, which was better than the one obtained previously with other optical comparison methodologies. The technique also allowed discriminating between pens from the same brand. PMID:22115723
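
    A hedged sketch of the comparison logic described above: Pearson correlations between pairs of spectra, a differentiation threshold taken from within-pen replicate variability so that no same-pen pair is falsely differentiated, and a discriminating power computed over all pen pairs. The synthetic spectra and the specific threshold rule are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch: Pearson correlation between pairs of mass spectra, a differentiation
# threshold chosen from replicate (same-pen) variability, and a discriminating power
# computed over all pen pairs. The spectra here are synthetic.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_pens, n_channels = 10, 500
pen_profiles = rng.gamma(2.0, 1.0, size=(n_pens, n_channels))      # "true" ink spectra
replicates = pen_profiles[:, None, :] + rng.normal(0, 0.1, (n_pens, 3, n_channels))

# Threshold: the lowest correlation observed between replicates of the same pen,
# so that no same-pen pair is falsely differentiated.
within = [pearsonr(a, b)[0] for reps in replicates for a, b in combinations(reps, 2)]
threshold = min(within)

# Differentiation: a pair of pens is distinguished if the correlation of their mean
# spectra falls below the threshold.
mean_spectra = replicates.mean(axis=1)
pairs = list(combinations(range(n_pens), 2))
differentiated = sum(pearsonr(mean_spectra[i], mean_spectra[j])[0] < threshold
                     for i, j in pairs)
print("discriminating power: %.0f%%" % (100.0 * differentiated / len(pairs)))
```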

  3. Reprocessing the Southern Hemisphere ADditional OZonesondes (SHADOZ) Database for Long-Term Trend Analyses

    NASA Astrophysics Data System (ADS)

    Witte, J. C.; Thompson, A. M.; Coetzee, G.; Fujiwara, M.; Johnson, B. J.; Sterling, C. W.; Cullis, P.; Ashburn, C. E.; Jordan, A. F.

    2015-12-01

    SHADOZ is a large archive of tropical balloon-borne ozonesonde data at NASA/Goddard Space Flight Center with data from 14 tropical and subtropical stations provided by collaborators in Europe, Asia, Latin America and Africa. The SHADOZ time series began in 1998, using electrochemical concentration cell (ECC) ozonesondes. Like many long-term sounding stations, SHADOZ is characterized by variations in operating procedures, launch protocols, and data processing such that biases within a data record and among sites appear. In addition, over time, the radiosonde and ozonesonde instruments and data processing protocols have changed, adding to the measurement uncertainties at individual stations and limiting the reliability of ozone profile trends and continuous satellite validation. Currently, the ozonesonde community is engaged in reprocessing ECC data, with an emphasis on homogenization of the records to compensate for the variations in instrumentation and technique. The goals are to improve the information and integrity of each measurement record and to support calculation of more reliable trends. We illustrate the reprocessing activity of SHADOZ with selected stations. We will (1) show reprocessing steps based on the recent WMO report that provides post-processing guidelines for ozonesondes; (2) characterize uncertainties in various parts of the ECC conditioning process; and (3) compare original and reprocessed data to co-located ground and satellite measurements of column ozone.

  4. An educational review of the statistical issues in analysing utility data for cost-utility analysis.

    PubMed

    Hunter, Rachael Maree; Baio, Gianluca; Butt, Thomas; Morris, Stephen; Round, Jeff; Freemantle, Nick

    2015-04-01

    The aim of cost-utility analysis is to support decision making in healthcare by providing a standardised mechanism for comparing resource use and health outcomes across programmes of work. The focus of this paper is the denominator of the cost-utility analysis, specifically the methodology and statistical challenges associated with calculating QALYs from patient-level data collected as part of a trial. We provide a brief description of the most common questionnaire used to calculate patient level utility scores, the EQ-5D, followed by a discussion of other ways to calculate patient level utility scores alongside a trial including other generic measures of health-related quality of life and condition- and population-specific questionnaires. Detail is provided on how to calculate the mean QALYs per patient, including discounting, adjusting for baseline differences in utility scores and a discussion of the implications of different methods for handling missing data. The methods are demonstrated using data from a trial. As the methods chosen can systematically change the results of the analysis, it is important that standardised methods such as patient-level analysis are adhered to as best as possible. Regardless, researchers need to ensure that they are sufficiently transparent about the methods they use so as to provide the best possible information to aid in healthcare decision making. PMID:25595871
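
    A minimal sketch of the QALY arithmetic for a single participant, assuming a 3.5% annual discount rate and trapezoidal integration of EQ-5D utility scores; the time points and utility values are invented.

```python
# Illustrative QALY calculation for one trial participant: area under the utility
# curve by the trapezoidal rule, with utilities discounted at an assumed 3.5% per year.
# Time points, utilities and the discount rate are invented for the sketch.
import numpy as np

times     = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])        # years from randomisation
utilities = np.array([0.60, 0.70, 0.72, 0.75, 0.74, 0.76])   # e.g. EQ-5D index scores
discount_rate = 0.035

# Discount each utility observation to present value before integrating
discounted = utilities / (1.0 + discount_rate) ** times
# Trapezoidal rule, written out explicitly
qalys = np.sum((times[1:] - times[:-1]) * (discounted[1:] + discounted[:-1]) / 2.0)
print("discounted QALYs over 2 years: %.3f" % qalys)

# In a between-arm comparison, mean QALYs would additionally be adjusted for the
# baseline utility (times == 0) by regression, as discussed in the paper.
```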

  5. Lightning NOx Statistics Derived by NASA Lightning Nitrogen Oxides Model (LNOM) Data Analyses

    NASA Technical Reports Server (NTRS)

    Koshak, William; Peterson, Harold

    2013-01-01

    What is the LNOM? The NASA Marshall Space Flight Center (MSFC) Lightning Nitrogen Oxides Model (LNOM) [Koshak et al., 2009, 2010, 2011; Koshak and Peterson 2011, 2013] analyzes VHF Lightning Mapping Array (LMA) and National Lightning Detection Network™ (NLDN) data to estimate the lightning nitrogen oxides (LNOx) produced by individual flashes. Figure 1 provides an overview of LNOM functionality. Benefits of LNOM: (1) Does away with unrealistic "vertical stick" lightning channel models for estimating LNOx; (2) Uses ground-based VHF data that maps out the true channel in space and time to < 100 m accuracy; (3) Therefore, true channel segment height (ambient air density) is used to compute LNOx; (4) True channel length is used! (typically tens of kilometers since channel has many branches and "wiggles"); (5) Distinctions between ground and cloud flashes are made; (6) For ground flashes, the actual peak current from the NLDN is used to compute NOx from the lightning return stroke; (7) NOx computed for several other lightning discharge processes (based on Cooray et al., 2009 theory): (a) Hot core of stepped leaders and dart leaders, (b) Corona sheath of stepped leader, (c) K-change, (d) Continuing Currents, and (e) M-components; and (8) LNOM statistics (see later) can be used to parameterize LNOx production for regional air quality models (like CMAQ), and for global chemical transport models (like GEOS-Chem).

  6. Statistical improvements in functional magnetic resonance imaging analyses produced by censoring high-motion data points.

    PubMed

    Siegel, Joshua S; Power, Jonathan D; Dubis, Joseph W; Vogel, Alecia C; Church, Jessica A; Schlaggar, Bradley L; Petersen, Steven E

    2014-05-01

    Subject motion degrades the quality of task functional magnetic resonance imaging (fMRI) data. Here, we test two classes of methods to counteract the effects of motion in task fMRI data: (1) a variety of motion regressions and (2) motion censoring ("motion scrubbing"). In motion regression, various regressors based on realignment estimates were included as nuisance regressors in general linear model (GLM) estimation. In motion censoring, volumes in which head motion exceeded a threshold were withheld from GLM estimation. The effects of each method were explored in several task fMRI data sets and compared using indicators of data quality and signal-to-noise ratio. Motion censoring decreased variance in parameter estimates within- and across-subjects, reduced residual error in GLM estimation, and increased the magnitude of statistical effects. Motion censoring performed better than all forms of motion regression and also performed well across a variety of parameter spaces, in GLMs with assumed or unassumed response shapes. We conclude that motion censoring improves the quality of task fMRI data and can be a valuable processing step in studies involving populations with even mild amounts of head movement.
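
    A sketch of the censoring step, assuming the common framewise displacement (FD) formulation (sum of absolute differences of the six realignment parameters, with rotations converted to millimetres on a 50 mm radius) and an illustrative 0.9 mm threshold; neither is necessarily the exact metric or cut-off used in the paper.

```python
# Sketch of motion censoring: framewise displacement (FD) computed from the six
# realignment parameters, and frames above a threshold flagged for exclusion from
# GLM estimation. The 0.9 mm threshold and 50 mm head radius are common choices,
# assumed here for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_frames = 200
# Realignment parameters: 3 translations (mm) and 3 rotations (radians) per frame
params = np.cumsum(rng.normal(0, 0.05, size=(n_frames, 6)), axis=0)

deltas = np.abs(np.diff(params, axis=0, prepend=params[:1]))
deltas[:, 3:] *= 50.0          # convert rotations to mm on a 50 mm radius sphere
fd = deltas.sum(axis=1)        # framewise displacement per volume

keep = fd <= 0.9               # volumes retained for GLM estimation
print("censored %d of %d volumes" % ((~keep).sum(), n_frames))
```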

  7. Comparisons between transect and fixed point in an oceanic turbulent flow: statistical analyses

    NASA Astrophysics Data System (ADS)

    Koziol, Lucie; Schmitt, Francois G.; Artigas, Felipe; Lizon, Fabrice

    2016-04-01

    Oceanological processes exhibit important fluctuations over wide ranges of spatial and temporal scales. These fluctuations are related to the turbulence of the ocean. Usually, in turbulence studies, one considers fixed-point Eulerian measurements, or Lagrangian measurements following an element of fluid. In oceanography, on the other hand, measurements are often made from a boat operating along a transect, with the boat moving through the medium at a fixed speed relative to the flow. The aim of our study is to determine whether such a moving reference frame modifies the statistics of the measurements. For this we compare two types of high-frequency measurements: fixed-point measurements, and transect measurements in which the boat moves at a fixed speed relative to the flow. 1 Hz fluorometer measurements are considered in both cases, made on the same day under similar conditions. Power spectra of the time series are considered, as well as local mean and variance measurements along each transect. It is found that the spectral scaling slope of the measurements is not modified, but the variance is very different, being much larger for the moving frame. This result calls for theoretical understanding and has potentially important consequences for high-frequency measurements made from moving frames in oceanography.
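
    A sketch of the kind of comparison described above: Welch power spectra, a log-log scaling slope fitted over a mid-frequency band, and sample variances for two synthetic series standing in for the fixed-point and transect fluorometer records; the data and the fitting band are invented.

```python
# Sketch comparing a fixed-point and a transect time series: Welch power spectra,
# the log-log scaling slope over a chosen frequency band, and the sample variances.
# The two synthetic series stand in for the 1 Hz fluorometer records.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
n = 4096
fixed    = np.cumsum(rng.normal(size=n))           # toy red-noise "fixed point" record
transect = 3.0 * np.cumsum(rng.normal(size=n))     # same scaling, larger variance

for name, series in [("fixed point", fixed), ("transect", transect)]:
    f, pxx = welch(series, fs=1.0, nperseg=512)
    sel = (f > 0.01) & (f < 0.2)                   # fit the slope over a mid-frequency band
    slope = np.polyfit(np.log10(f[sel]), np.log10(pxx[sel]), 1)[0]
    print("%-12s spectral slope %.2f, variance %.1f" % (name, slope, series.var()))
```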

  8. Statistical Improvements in Functional Magnetic Resonance Imaging Analyses Produced by Censoring High-Motion Data Points

    PubMed Central

    Siegel, Joshua S.; Power, Jonathan D.; Dubis, Joseph W.; Vogel, Alecia C.; Church, Jessica A.; Schlaggar, Bradley L.; Petersen, Steven E.

    2013-01-01

    Subject motion degrades the quality of task functional magnetic resonance imaging (fMRI) data. Here, we test two classes of methods to counteract the effects of motion in task fMRI data: (1) a variety of motion regressions and (2) motion censoring (“motion scrubbing”). In motion regression, various regressors based on realignment estimates were included as nuisance regressors in general linear model (GLM) estimation. In motion censoring, volumes in which head motion exceeded a threshold were withheld from GLM estimation. The effects of each method were explored in several task fMRI data sets and compared using indicators of data quality and signal-to-noise ratio. Motion censoring decreased variance in parameter estimates within- and across-subjects, reduced residual error in GLM estimation, and increased the magnitude of statistical effects. Motion censoring performed better than all forms of motion regression and also performed well across a variety of parameter spaces, in GLMs with assumed or unassumed response shapes. We conclude that motion censoring improves the quality of task fMRI data and can be a valuable processing step in studies involving populations with even mild amounts of head movement. PMID:23861343

  9. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    PubMed

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare power and estimation of a mixture model to an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, whereas estimates from the mixture model were unbiased except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
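
    To make the censoring idea concrete, the sketch below fits a log-normal model by maximum likelihood in which values below a detection limit contribute a censoring probability (a log CDF term) rather than a density term, which is the mechanism underlying AFT-type treatment of non-detects. The simulated data, the detection limit and the distributional choice are assumptions, not the authors' model.

```python
# Sketch of left-censored maximum likelihood: observed abundances contribute a
# log-normal density, values below the detection limit contribute the censoring
# probability. Data and the detection limit are simulated.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
true_mu, true_sigma, lod = 1.0, 0.8, 1.5
x = rng.lognormal(true_mu, true_sigma, size=200)
observed = x[x >= lod]               # measured abundances
n_censored = np.sum(x < lod)         # reported as "missing" below the detection limit

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # log-normal density on the observed values
    ll_obs = stats.norm.logpdf(np.log(observed), mu, sigma) - np.log(observed)
    # censoring probability for each non-detect
    ll_cens = n_censored * stats.norm.logcdf((np.log(lod) - mu) / sigma)
    return -(ll_obs.sum() + ll_cens)

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print("estimated mu=%.2f sigma=%.2f (true 1.00, 0.80)" % (mu_hat, sigma_hat))
```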

  10. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 1: Review and Comparison of Techniques

    SciTech Connect

    Kleijnen, J.P.C.; Helton, J.C.

    1999-03-24

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples.
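
    A sketch of three of the pattern tests listed above, applied to one (input, output) pair from a hypothetical Monte Carlo sample; the nonmonotonic response is invented to show how the binned Kruskal-Wallis test can flag a relationship that the correlation coefficients miss.

```python
# Sketch of scatterplot pattern tests: linear (Pearson), monotonic (Spearman), and
# a Kruskal-Wallis test for trends in central tendency across bins of the input.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 300)                              # sampled input variable
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 300)     # nonmonotonic model response

r, p_r = stats.pearsonr(x, y)
rho, p_rho = stats.spearmanr(x, y)

# Trend in central tendency: compare y across five equal-width bins of x
bins = np.digitize(x, np.linspace(0, 1, 6)[1:-1])
groups = [y[bins == b] for b in range(5)]
h, p_h = stats.kruskal(*groups)

print("Pearson  r=%.2f (p=%.3f)" % (r, p_r))
print("Spearman rho=%.2f (p=%.3f)" % (rho, p_rho))
print("Kruskal-Wallis H=%.1f (p=%.3g)" % (h, p_h))
```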

  11. On statistical methods for analysing the geographical distribution of cancer cases near nuclear installations.

    PubMed

    Bithell, J F; Stone, R A

    1989-03-01

    There is great public concern, often based on anecdotal reports, about risks from ionising radiation. Recent interest has been directed at an excess of leukaemia cases in the locality of civil nuclear installations at Sellafield and Sizewell, and epidemiologists have a duty to pursue such information vigorously. This paper sets out to show that the epidemiological methods most commonly used can be improved upon. When analysing geographical data it is necessary to consider location. The most obvious quantification of location is ranked distance, though other measures which may be more meaningful in relation to aetiology may be substituted. A test based on distance ranks, the "Poisson maximum test", depends on the maximum of observed relative risk in regions of increasing size, but with significance level adjusted for selection. Applying this test to data from Sellafield and Sizewell shows that the excess of leukaemia incidence observed at Seascale, near Sellafield, is not an artefact due to data selection by region, and that the excess probably results from a genuine, if as yet unidentified cause (there being little evidence of any other locational association once the Seascale cases have been removed). So far as Sizewell is concerned, geographical proximity to the nuclear power station does not seem particularly important. PMID:2592896

  12. On statistical methods for analysing the geographical distribution of cancer cases near nuclear installations.

    PubMed Central

    Bithell, J F; Stone, R A

    1989-01-01

    There is great public concern, often based on anecdotal reports, about risks from ionising radiation. Recent interest has been directed at an excess of leukaemia cases in the locality of civil nuclear installations at Sellafield and Sizewell, and epidemiologists have a duty to pursue such information vigorously. This paper sets out to show that the epidemiological methods most commonly used can be improved upon. When analysing geographical data it is necessary to consider location. The most obvious quantification of location is ranked distance, though other measures which may be more meaningful in relation to aetiology may be substituted. A test based on distance ranks, the "Poisson maximum test", depends on the maximum of observed relative risk in regions of increasing size, but with significance level adjusted for selection. Applying this test to data from Sellafield and Sizewell shows that the excess of leukaemia incidence observed at Seascale, near Sellafield, is not an artefact due to data selection by region, and that the excess probably results from a genuine, if as yet unidentified cause (there being little evidence of any other locational association once the Seascale cases have been removed). So far as Sizewell is concerned, geographical proximity to the nuclear power station does not seem particularly important. PMID:2592896

  13. Statistical analyses of the background distribution of groundwater solutes, Los Alamos National Laboratory, New Mexico.

    SciTech Connect

    Longmire, Patrick A.; Goff, Fraser; Counce, D. A.; Ryti, R. T.; Dale, Michael R.; Britner, Kelly A

    2004-01-01

    Background or baseline water chemistry data and information are required to distinguish between contaminated and non-contaminated waters for environmental investigations conducted at Los Alamos National Laboratory (referred to as the Laboratory). The term 'background' refers to natural waters discharged by springs or penetrated by wells that have not been contaminated by LANL or other municipal or industrial activities, and that are representative of groundwater discharging from their respective aquifer material. These investigations are conducted as part of the Environmental Restoration (ER) Project, Groundwater Protection Program (GWPP), Laboratory Surveillance Program, the Hydrogeologic Workplan, and the Site-Wide Environmental Impact Statement (SWEIS). This poster provides a comprehensive, validated database of inorganic, organic, stable isotope, and radionuclide analyses of up to 136 groundwater samples collected from 15 baseline springs and wells located in and around Los Alamos National Laboratory, New Mexico. The region considered in this investigation extends from the western edge of the Jemez Mountains eastward to the Rio Grande and from Frijoles Canyon northward to Garcia Canyon. Figure 1 shows the fifteen stations sampled for this investigation. The sampling stations and associated aquifer types are summarized in Table 1.

  14. Using Innovative Statistical Analyses to Assess Soil Degradation due to Land Use Change

    NASA Astrophysics Data System (ADS)

    Khaledian, Yones; Kiani, Farshad; Ebrahimi, Soheila; Brevik, Eric C.; Aitkenhead-Peterson, Jacqueline

    2016-04-01

    Soil erosion and overall loss of soil fertility is a serious issue for loess soils of the Golestan province, northern Iran. The assessment of soil degradation at large watershed scales is urgently required. This research investigated the role of land use change and its effect on soil degradation in cultivated, pasture and urban lands, when compared to native forest in terms of declines in soil fertility. Some novel statistical methods including partial least squares (PLS), principal component regression (PCR), and ordinary least squares regression (OLS) were used to predict soil cation-exchange capacity (CEC) using soil characteristics. PCA identified five primary components of soil quality. The PLS model was used to predict soil CEC from the soil characteristics including bulk density (BD), electrical conductivity (EC), pH, calcium carbonate equivalent (CCE), soil particle density (DS), mean weight diameter (MWD), soil porosity (F), organic carbon (OC), Labile carbon (LC), mineral carbon, saturation percentage (SP), soil particle size (clay, silt and sand), exchangeable cations (Ca2+, Mg2+, K+, Na+), and soil microbial respiration (SMR) collected in the Ziarat watershed. In order to evaluate the best fit, two other methods, PCR and OLS, were also examined. An exponential semivariogram using PLS predictions revealed stronger spatial dependence among CEC [r2 = 0.80, and RMSE= 1.99] than the other methods, PCR [r2 = 0.84, and RMSE= 2.45] and OLS [r2 = 0.84, and RMSE= 2.45]. Therefore, the PLS method provided the best model for the data. In stepwise regression analysis, MWD and LC were selected as influential variables in all soils, whereas the other influential parameters were different in various land uses. This study quantified reductions in numerous soil quality parameters resulting from extensive land-use changes and urbanization in the Ziarat watershed in Northern Iran.
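
    A sketch of the model comparison, assuming scikit-learn implementations of PLS and OLS with cross-validated r2 and RMSE scoring; the soil variables and their relationships are invented stand-ins for the study's measurements.

```python
# Sketch: partial least squares versus ordinary least squares for predicting CEC
# from correlated soil properties, scored by cross-validated r2 and RMSE.
# Predictor names and data are invented; only the workflow mirrors the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n = 120
oc   = rng.normal(2.0, 0.5, n)                    # organic carbon (%)
clay = rng.normal(30, 8, n) + 5 * oc              # clay (%), correlated with OC
ph   = rng.normal(7.5, 0.4, n)
X = np.column_stack([oc, clay, ph])
cec = 1.5 * oc + 0.3 * clay - 0.5 * ph + rng.normal(0, 1.0, n)   # cmol(+)/kg

for name, model in [("PLS", PLSRegression(n_components=2)), ("OLS", LinearRegression())]:
    pred = cross_val_predict(model, X, cec, cv=5).ravel()
    ss_res = np.sum((cec - pred) ** 2)
    r2 = 1 - ss_res / np.sum((cec - cec.mean()) ** 2)
    rmse = np.sqrt(ss_res / n)
    print("%s: r2=%.2f RMSE=%.2f" % (name, r2, rmse))
```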

  15. Performance of statistical methods for analysing survival data in the presence of non-random compliance.

    PubMed

    Odondi, Lang'o; McNamee, Roseanne

    2010-12-20

    Noncompliance often complicates estimation of treatment efficacy from randomized trials. Under random noncompliance, per protocol analyses or even simple regression adjustments for noncompliance could be adequate for causal inference, but special methods are needed when noncompliance is related to risk. For survival data, Robins and Tsiatis introduced the semi-parametric structural Causal Accelerated Life Model (CALM) which allows time-dependent departures from randomized treatment in either arm and relates each observed event time to a potential event time that would have been observed if the control treatment had been given throughout the trial. Alternatively, Loeys and Goetghebeur developed a structural Proportional Hazards (C-Prophet) model for when there is all-or-nothing noncompliance in the treatment arm only. White et al. proposed a 'complier average causal effect' method for Proportional Hazards estimation which allows time-dependent departures from randomized treatment in the active arm. A time-invariant version of this estimator (CHARM) consists of a simple adjustment to the Intention-to-Treat hazard ratio estimate. We used simulation studies mimicking a randomized controlled trial of active treatment versus control with censored time-to-event data, and under both random and non-random time-dependent noncompliance, to evaluate performance of these methods in terms of 95 per cent confidence interval coverage, bias and root mean square errors (RMSE). All methods performed well in terms of bias, even the C-Prophet used after treating time-varying compliance as all-or-nothing. Coverage of the latter method, as implemented in Stata, was too low. The CALM method performed best in terms of bias and coverage but had the largest RMSE. PMID:20963732

  16. Revealing Facts and Avoiding Biases: A Review of Several Common Problems in Statistical Analyses of Epidemiological Data

    PubMed Central

    Yan, Lihan; Sun, Yongmin; Boivin, Michael R.; Kwon, Paul O.; Li, Yuanzhang

    2016-01-01

    This paper reviews several common challenges encountered in statistical analyses of epidemiological data for epidemiologists. We focus on the application of linear regression, multivariate logistic regression, and log-linear modeling to epidemiological data. Specific topics include: (a) deletion of outliers, (b) heteroscedasticity in linear regression, (c) limitations of principal component analysis in dimension reduction, (d) hazard ratio vs. odds ratio in a rate comparison analysis, (e) log-linear models with multiple response data, and (f) ordinal logistic vs. multinomial logistic models. As a general rule, a thorough examination of a model’s assumptions against both current data and prior research should precede its use in estimating effects. PMID:27774446

  17. Efficacy of lipase from Aspergillus niger as an additive in detergent formulations: a statistical approach.

    PubMed

    Saisubramanian, N; Edwinoliver, N G; Nandakumar, N; Kamini, N R; Puvanakrishnan, R

    2006-08-01

    The efficacy of lipase from Aspergillus niger MTCC 2594 as an additive in laundry detergent formulations was assessed using response surface methodology (RSM). A five-level four-factorial central composite design was chosen to explain the washing protocol with four critical factors, viz. detergent concentration, lipase concentration, buffer pH and washing temperature. The model suggested that all the factors chosen had a significant impact on oil removal and the optimal conditions for the removal of olive oil from cotton fabric were 1.0% detergent, 75 U of lipase, buffer pH of 9.5 and washing temperature of 25 degrees C. Under optimal conditions, the removal of olive oil from cotton fabric was 33 and 17.1% at 25 and 49 degrees C, respectively, in the presence of lipase over treatment with detergent alone. Hence, lipase from A. niger could be effectively used as an additive in detergent formulation for the removal of triglyceride soil both in cold and warm wash conditions.

  18. Statistical approaches to analyse patient-reported outcomes as response variables: an application to health-related quality of life.

    PubMed

    Arostegui, Inmaculada; Núñez-Antón, Vicente; Quintana, José M

    2012-04-01

    Patient-reported outcomes (PRO) are used as primary endpoints in medical research and their statistical analysis is an important methodological issue. Theoretical assumptions of the selected methodology and interpretation of its results are issues to take into account when selecting an appropriate statistical technique to analyse data. We present eight methods of analysis of a popular PRO tool under different assumptions that lead to different interpretations of the results. All methods were applied to responses obtained from two of the health dimensions of the SF-36 Health Survey. The proposed methods are: multiple linear regression (MLR), with least square and bootstrap estimations, tobit regression, ordinal logistic and probit regressions, beta-binomial regression (BBR), binomial-logit-normal regression (BLNR) and coarsening. Selection of an appropriate model depends not only on its distributional assumptions but also on the continuous or ordinal features of the response and the fact that they are constrained to a bounded interval. The BBR approach renders satisfactory results in a broad number of situations. MLR is not recommended, especially with skewed outcomes. Ordinal methods are only appropriate for outcomes with a small number of categories. Tobit regression is an acceptable option under normality assumptions and in the presence of moderate ceiling or floor effects. The BLNR and coarsening proposals are also acceptable, but only under certain distributional assumptions that are difficult to test a priori. Interpretation of the results is more convenient when using the BBR, BLNR and ordinal logistic regression approaches.

  19. Analyses of simulations of three-dimensional lattice proteins in comparison with a simplified statistical mechanical model of protein folding.

    PubMed

    Abe, H; Wako, H

    2006-07-01

    Folding and unfolding simulations of three-dimensional lattice proteins were analyzed using a simplified statistical mechanical model in which their amino acid sequences and native conformations were incorporated explicitly. Using this statistical mechanical model, under the assumption that only interactions between amino acid residues within a local structure in a native state are considered, the partition function of the system can be calculated for a given native conformation without any adjustable parameter. The simulations were carried out for two different native conformations, for each of which two foldable amino acid sequences were considered. The native and non-native contacts between amino acid residues occurring in the simulations were examined in detail and compared with the results derived from the theoretical model. The equilibrium thermodynamic quantities (free energy, enthalpy, entropy, and the probability of each amino acid residue being in the native state) at various temperatures obtained from the simulations and the theoretical model were also examined in order to characterize the folding processes that depend on the native conformations and the amino acid sequences. Finally, the free energy landscapes were discussed based on these analyses.

  20. Analysing factors related to slipping, stumbling, and falling accidents at work: Application of data mining methods to Finnish occupational accidents and diseases statistics database.

    PubMed

    Nenonen, Noora

    2013-03-01

    The utilisation of data mining methods has become common in many fields. In occupational accident analysis, however, these methods are still rarely exploited. This study applies methods of data mining (decision tree and association rules) to the Finnish national occupational accidents and diseases statistics database to analyse factors related to slipping, stumbling, and falling (SSF) accidents at work from 2006 to 2007. SSF accidents at work constitute a large proportion (22%) of all accidents at work in Finland. In addition, they are more likely to result in longer periods of incapacity for work than other workplace accidents. The most important factor influencing whether or not an accident at work is related to SSF is the specific physical activity of movement. In addition, the risk of SSF accidents at work seems to depend on the occupation and the age of the worker. The results were in line with previous research. Hence the application of data mining methods was considered successful. The results did not reveal anything unexpected though. Nevertheless, because of the capability to illustrate a large dataset and relationships between variables easily, data mining methods were seen as a useful supplementary method in analysing occupational accident data.
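
    A sketch of the decision-tree part of such an analysis, with invented variable names and an invented coding of SSF versus non-SSF accidents; only the method, recursive partitioning on one-hot-encoded categorical accident attributes, follows the paper.

```python
# Sketch of the decision-tree step: one-hot encoded accident records classified as
# SSF / non-SSF. Variable names, categories and the generating rule are invented.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 1000
activity = rng.choice(["movement", "handling objects", "working with tools"], n)
age_band = rng.choice(["<35", "35-54", "55+"], n)
# Invented rule: SSF accidents are far more likely when the activity is movement
p_ssf = np.where(activity == "movement", 0.55, 0.10)
ssf = rng.random(n) < p_ssf

df = pd.DataFrame({"activity": activity, "age_band": age_band})
X = pd.get_dummies(df)                     # one-hot encoding of the categorical attributes

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, ssf)
print(export_text(tree, feature_names=list(X.columns)))
```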

  1. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  2. Application of both a physical theory and statistical procedure in the analyses of an in vivo study of aerosol deposition

    SciTech Connect

    Cheng, K.H.; Swift, D.L.; Yang, Y.H.

    1995-12-01

    Regional deposition of inhaled aerosols in the respiratory tract is a significant factor in assessing the biological effects from exposure to a variety of environmental particles. Understanding the deposition efficiency of inhaled aerosol particles in the nasal and oral airways can help evaluate doses to the extrathoracic region as well as to the lung. Dose extrapolation from laboratory animals to humans has been questioned due to significant physiological and anatomical variations. Although human studies are considered ideal for obtaining in vivo toxicity information important in risk assessment, the number of subjects in the study is often small compared to epidemiological and animal studies. This study measured in vivo the nasal airway dimensions and the extrathoracic deposition of ultrafine aerosols in 10 normal adult males. Variability among individuals was significant. The nasal geometry of each individual was characterized at a resolution of 3 mm using magnetic resonance imaging (MRI) and acoustic rhinometry (AR). The turbulent diffusion theory was used to describe the nonlinear nature of extrathoracic aerosol deposition. To determine what dimensional features of the nasal airway were responsible for the marked differences in particle deposition, the MIXed-effects NonLINear Regression (MIXNLIN) procedure was used to account for the random effect of repeated measurements on the same subject. Using both turbulent diffusion theory and MIXNLIN, the ultrafine particle deposition is correlated with nasal dimensions measured by the surface area, minimum cross-sectional area, and complexity of the airway shape. The combination of MRI and AR is useful for characterizing both detailed nasal dimensions and temporal changes in nasal patency. We conclude that a suitable statistical procedure incorporated with existing physical theories must be used in data analyses for experimental studies of aerosol deposition that involve a relatively small number of human subjects.

  3. Quantifying Trace Amounts of Aggregates in Biopharmaceuticals Using Analytical Ultracentrifugation Sedimentation Velocity: Bayesian Analyses and F Statistics.

    PubMed

    Wafer, Lucas; Kloczewiak, Marek; Luo, Yin

    2016-07-01

    Analytical ultracentrifugation-sedimentation velocity (AUC-SV) is often used to quantify high molar mass species (HMMS) present in biopharmaceuticals. Although these species are often present in trace quantities, they have received significant attention due to their potential immunogenicity. Commonly, AUC-SV data is analyzed as a diffusion-corrected, sedimentation coefficient distribution, or c(s), using SEDFIT to numerically solve Lamm-type equations. SEDFIT also utilizes maximum entropy or Tikhonov-Phillips regularization to further allow the user to determine relevant sample information, including the number of species present, their sedimentation coefficients, and their relative abundance. However, this methodology has several, often unstated, limitations, which may impact the final analysis of protein therapeutics. These include regularization-specific effects, artificial "ripple peaks," and spurious shifts in the sedimentation coefficients. In this investigation, we experimentally verified that an explicit Bayesian approach, as implemented in SEDFIT, can largely correct for these effects. Clear guidelines on how to implement this technique and interpret the resulting data, especially for samples containing micro-heterogeneity (e.g., differential glycosylation), are also provided. In addition, we demonstrated how the Bayesian approach can be combined with F statistics to draw more accurate conclusions and rigorously exclude artifactual peaks. Numerous examples with an antibody and an antibody-drug conjugate were used to illustrate the strengths and drawbacks of each technique.

  4. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    PubMed

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application. PMID:18371945

  5. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    PubMed

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.

  6. Statistical Analyses of d18O in Meteoric Waters From the Western US and East Asia: Implications for Paleoaltimetry

    NASA Astrophysics Data System (ADS)

    Lechler, A. R.; Niemi, N. A.

    2008-12-01

    Questions on the timing of Tibetan Plateau uplift and its associated influence on the development of the Indian and Asian monsoons are best addressed through accurate determinations of regional paleoelevation. Previous determinations of paleoaltimetry utilized the stable isotopic composition of paleo-meteoric waters as recorded in various proxies (authigenic minerals, fossils, etc.), in combination with empirically and model determined elevation isotopic lapse rates. However, the applicability of these lapse rates, derived principally from orogenic settings, to high continental plateaus remains uncertain. Our research aims to gain a better understanding of the potential controls on the δ18O composition of meteoric waters over continental plateaus through a principal component analysis (PCA) of modern waters from eastern Asia and the western US. In particular, we investigate how various environmental parameters (elevation, latitude, longitude, MAP, and MAT) influence the δ18O composition of these waters. First, these analyses reveal that elevation and latitude are the primary controls on isotopic composition in all regions investigated, as expected. Second, PCA results yield elevation lapse rates from orogenic settings (i.e. Sierra Nevada, Himalaya) of ~ -3‰/km, in strong agreement with both empirical and Rayleigh distillation model derived lapse rates. The Great Plains of the US, although not an orogenic setting, represents a monotonic topographic rise, and is also characterized by a ~ -3‰/km lapse rate. In high, arid plateau regions (Basin and Range, Tibet), however, elevation lapse rates are ~ -1.5‰/km, half that of orogenic settings. An empirically derived lapse rate from small source area springs collected over a 2 km elevation change from a single mountain range in the Basin and Range yields an identical rate. One clue as to the source of this lowered lapse rate is eastern China, which also displays an elevation lapse rate of ~ -1.5‰/km, despite

  7. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses.

    PubMed

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany; Thabane, Lehana; Gluud, Christian

    2011-12-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses depending on whether a normal distribution or a t-distribution of the intervention effect estimates was assumed. Large discrepancies were observed for the measures of degree of heterogeneity when comparing DL with each of the four alternative estimators. Estimating the degree (percentage) of heterogeneity on the basis of less biased between-trial variance estimators seems preferable to current practice. Disclosing inferential sensitivity of p-values and CIs may also be necessary when borderline significant results have substantial impact on the conclusion. Copyright © 2012 John Wiley & Sons, Ltd.
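
    For reference, a minimal sketch of the conventional DL computation being compared: the moment estimator of the between-trial variance, random-effects weights, and a normal-approximation confidence interval for the pooled effect. The five trial results are invented.

```python
# Sketch of a DerSimonian-Laird random-effects meta-analysis on invented trial results.
import numpy as np
from scipy import stats

yi = np.array([-0.30, -0.10, -0.45, 0.05, -0.25])   # per-trial log odds ratios
vi = np.array([0.04, 0.09, 0.06, 0.12, 0.05])       # within-trial variances

w = 1.0 / vi                                          # fixed-effect weights
ybar = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - ybar) ** 2)                      # Cochran's Q
k = len(yi)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))   # DL estimator

w_star = 1.0 / (vi + tau2)                            # random-effects weights
pooled = np.sum(w_star * yi) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
z = stats.norm.ppf(0.975)
i2 = max(0.0, (Q - (k - 1)) / Q) * 100                # I^2 heterogeneity (%)

print("tau^2=%.3f  I^2=%.0f%%  pooled=%.2f (95%% CI %.2f to %.2f)"
      % (tau2, i2, pooled, pooled - z * se, pooled + z * se))
```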

  8. Analyses of microbial community within a composter operated using household garbage with special reference to the addition of soybean oil.

    PubMed

    Aoshima, M; Pedro, M S; Haruta, S; Ding, L; Fukada, T; Kigawa, A; Kodama, T; Ishii, M; Igarashi, Y

    2001-01-01

    A commercially available composter was operated using fixed composition of garbage with or without the addition of soybean oil. The composter was operated without adding seed microorganisms or bulking materials. Microflora within the composter were analyzed by denaturing gradient gel electrophoresis (DGGE) in the case of oil addition, or by 16/18 S rRNA gene sequencing of the isolated microorganisms in the case of no oil addition. The results showed that, irrespective of the addition of oil, the bacteria identified were all gram positive, and that lactobacilli seemed to be the key microorganisms. Based on the results, suitable microflora for use in a household composter are discussed.

  9. Adjusting the Adjusted X²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  10. RT-PCR and statistical analyses of adeABC expression in clinical isolates of Acinetobacter calcoaceticus-Acinetobacter baumannii complex.

    PubMed

    Ruzin, Alexey; Immermann, Frederick W; Bradford, Patricia A

    2010-06-01

    The relationship between expression of adeABC and minimal inhibitory concentration (MIC) of tigecycline was investigated by RT-PCR and statistical analyses in a population of 106 clinical isolates (MIC range, 0.0313-16 microg/ml) of Acinetobacter calcoaceticus-Acinetobacter baumannii complex. There was a statistically significant linear relationship (p < 0.0001) between log-transformed expression values and log-transformed MIC values, indicating that overexpression of AdeABC efflux pump is a prevalent mechanism for decreased susceptibility to tigecycline in A. calcoaceticus-A. baumannii complex.
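
    A sketch of the statistical step described above, assuming a simple linear regression of log2-transformed expression on log2-transformed MIC; the isolate values are simulated.

```python
# Sketch: linear regression of log2-transformed expression on log2-transformed MIC.
# Isolate values are simulated; only the transformation and test mirror the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 106
log2_mic = rng.choice(np.arange(-5, 5), n).astype(float)     # MICs from ~0.03 to 16 ug/ml
log2_expr = 0.6 * log2_mic + rng.normal(0, 1.2, n)           # relative adeB expression

res = stats.linregress(log2_mic, log2_expr)
print("slope=%.2f  r=%.2f  p=%.2g" % (res.slope, res.rvalue, res.pvalue))
```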

  11. Biochemical analyses of the antioxidative activity and chemical ingredients in eight different Allium alien monosomic addition lines.

    PubMed

    Yaguchi, Shigenori; Matsumoto, Misato; Date, Rie; Harada, Kazuki; Maeda, Toshimichi; Yamauchi, Naoki; Shigyo, Masayoshi

    2013-01-01

    We measured the antioxidant contents and antioxidative activities in eight Allium fistulosum-shallot monosomic addition lines (MAL; FF+1A-FF+8A). The high antioxidative activity lines (FF+2A and FF+6A) showed high polyphenol accumulation. These additional chromosomes (2A and 6A) would therefore have anonymous genes related to the upregulation of polyphenol production, the antioxidative activities consequently being increased in these MALs. PMID:24317054

  12. Guided waves based SHM systems for composites structural elements: statistical analyses finalized at probability of detection definition and assessment

    NASA Astrophysics Data System (ADS)

    Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.

    2015-03-01

    Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have for many years represented one of the most promising innovations in the field of aerostructures, especially where composite materials (fibre-reinforced resins) are concerned. Layered materials still suffer today from drastic reductions of the maximum allowable stress values during the design phase, as well as from costly and recurrent inspections during the life-cycle phase, which prevent their structural and economic potential from being fully exploited in today's aircraft. These penalising measures are necessary mainly to account for the presence of undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonds). To relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate such flaws (an SHM system) can be considered, once its effectiveness and reliability have been statistically demonstrated through a rigorous definition and evaluation of the Probability of Detection function. This paper presents an experimental approach, with a statistical procedure, for evaluating the detection threshold of a guided-wave-based SHM system aimed at detecting delaminations in a typical layered composite wing panel. The experimental tests are mostly oriented towards characterising the statistical distribution of the measurements and damage metrics, as well as the detection capability of the system using this approach. It is not possible to replace numerically the part of the experimental tests aimed at POD, where the noise in the system response is crucial. Results of the experiments are presented in the paper and analysed.

  13. Statistical analyses of the results of 25 years of beach litter surveys on the south-eastern North Sea coast.

    PubMed

    Schulz, Marcus; Clemens, Thomas; Förster, Harald; Harder, Thorsten; Fleet, David; Gaus, Silvia; Grave, Christel; Flegel, Imme; Schrey, Eckart; Hartwig, Eike

    2015-08-01

    In the North Sea, the amount of litter present in the marine environment represents a severe environmental problem. In order to assess the magnitude of the problem and measure changes in abundance, the results of two beach litter monitoring programmes were compared and analysed for long-term trends applying multivariate techniques. Total beach litter pollution was persistently high. Spatial differences in litter abundance made it difficult to identify long-term trends: in places, more than 8000 litter items year(-1) were recorded on a 100 m long survey site on the island of Scharhörn, while the survey site on the beach on the island of Amrum revealed abundances lower by two orders of magnitude. Beach litter was dominated by plastic with mean proportions of 52%-91% of total beach litter. Non-parametric time series analyses detected many significant trends, which, however, did not show any systematic spatial patterns. Cluster analyses partly led to groupings of beaches according to their exposure to sources of litter, wind and currents. Surveys in short intervals of one to two weeks were found to give higher annual sums of beach litter than the quarterly surveys of the OSPAR method. Surveys at regular intervals of four weeks to five months would make monitoring results more reliable.
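
    A sketch of a non-parametric trend test of the kind used for such litter time series: the Mann-Kendall S statistic with its large-sample normal approximation and no tie correction. The annual counts are invented, and the paper's exact test may differ.

```python
# Sketch of the Mann-Kendall trend test on an invented annual litter-count series.
import numpy as np
from scipy import stats

counts = np.array([5200, 4800, 6100, 5900, 6500, 7200, 6900, 7800, 8300, 8100])  # items/year

n = len(counts)
s = sum(np.sign(counts[j] - counts[i]) for i in range(n - 1) for j in range(i + 1, n))
var_s = n * (n - 1) * (2 * n + 5) / 18.0              # variance without tie correction
z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity correction
p = 2 * stats.norm.sf(abs(z))
print("Mann-Kendall S=%d  z=%.2f  p=%.3f" % (s, z, p))
```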

  14. Statistical analyses of the results of 25 years of beach litter surveys on the south-eastern North Sea coast.

    PubMed

    Schulz, Marcus; Clemens, Thomas; Förster, Harald; Harder, Thorsten; Fleet, David; Gaus, Silvia; Grave, Christel; Flegel, Imme; Schrey, Eckart; Hartwig, Eike

    2015-08-01

    In the North Sea, the amount of litter present in the marine environment represents a severe environmental problem. In order to assess the magnitude of the problem and measure changes in abundance, the results of two beach litter monitoring programmes were compared and analysed for long-term trends applying multivariate techniques. Total beach litter pollution was persistently high. Spatial differences in litter abundance made it difficult to identify long-term trends: in places, more than 8000 litter items year(-1) were recorded on a 100 m long survey site on the island of Scharhörn, while the survey site on the beach on the island of Amrum revealed abundances lower by two orders of magnitude. Beach litter was dominated by plastic with mean proportions of 52%-91% of total beach litter. Non-parametric time series analyses detected many significant trends, which, however, did not show any systematic spatial patterns. Cluster analyses partly led to groupings of beaches according to their exposure to sources of litter, wind and currents. Surveys in short intervals of one to two weeks were found to give higher annual sums of beach litter than the quarterly surveys of the OSPAR method. Surveys at regular intervals of four weeks to five months would make monitoring results more reliable. PMID:26026589

  15. Combined Statistical Analyses of Peptide Intensities and Peptide Occurrences Improves Identification of Significant Peptides from MS-based Proteomics Data

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; McCue, Lee Ann; Waters, Katrina M.; Matzke, Melissa M.; Jacobs, Jon M.; Metz, Thomas O.; Varnum, Susan M.; Pounds, Joel G.

    2010-11-01

    Liquid chromatography-mass spectrometry-based (LC-MS) proteomics uses peak intensities of proteolytic peptides to infer the differential abundance of peptides/proteins. However, substantial run-to-run variability in peptide intensities and observations (presence/absence) of peptides makes data analysis quite challenging. The missing abundance values in LC-MS proteomics data are difficult to address with traditional imputation-based approaches because the mechanisms by which data are missing are unknown a priori. Data can be missing due to random mechanisms such as experimental error, or non-random mechanisms such as a true biological effect. We present a statistical approach that uses a test of independence known as a G-test to test the null hypothesis of independence between the number of missing values and the experimental groups. We pair the G-test results evaluating independence of missing data (IMD) with a standard analysis of variance (ANOVA) that uses only means and variances computed from the observed data. Each peptide is therefore represented by two statistical confidence metrics, one for qualitative differential observation and one for quantitative differential intensity. We use two simulated and two real LC-MS datasets to demonstrate the robustness and sensitivity of the ANOVA-IMD approach for assigning confidence to peptides with significant differential abundance among experimental groups.
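
    A sketch of the two per-peptide confidence metrics, assuming scipy's log-likelihood-ratio (G-test) option of chi2_contingency for the missingness table and a one-way ANOVA on the observed intensities; the counts and intensities are invented for one peptide across two groups.

```python
# Sketch of the two confidence metrics per peptide: a G-test of independence between
# missingness and experimental group, and a one-way ANOVA on the observed intensities.
import numpy as np
from scipy import stats

# Qualitative metric: observed/missing counts per group for one peptide
#                            group A  group B
observed_counts = np.array([[9,       3],     # runs in which the peptide was observed
                            [1,       7]])    # runs in which it was missing
g, p_g, dof, _ = stats.chi2_contingency(observed_counts, lambda_="log-likelihood")

# Quantitative metric: ANOVA on the log2 intensities that were actually observed
intens_a = np.array([21.3, 22.0, 21.7, 21.9, 22.4, 21.5, 21.8, 22.1, 21.6])
intens_b = np.array([19.8, 20.4, 20.1])
f, p_f = stats.f_oneway(intens_a, intens_b)

print("G-test (missingness): G=%.2f p=%.3f" % (g, p_g))
print("ANOVA  (intensity):   F=%.2f p=%.3f" % (f, p_f))
```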

  16. Literature review of some selected types of results and statistical analyses of total-ozone data. [for the ozonosphere

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1976-01-01

    The depletion of ozone in the stratosphere is examined, and causes for the depletion are cited. Ground station and satellite measurements of ozone, which are taken on a worldwide basis, are discussed. Instruments used in ozone measurement are discussed, such as the Dobson spectrophotometer, which is credited with providing the longest and most extensive series of observations for ground based observation of stratospheric ozone. Other ground based instruments used to measure ozone are also discussed. The statistical differences of ground based measurements of ozone from these different instruments are compared to each other, and to satellite measurements. Mathematical methods (i.e., trend analysis or linear regression analysis) of analyzing the variability of ozone concentration with respect to time and latitude are described. Various time series models which can be employed in accounting for ozone concentration variability are examined.

  17. The relationship between visual analysis and five statistical analyses in a simple AB single-case research design.

    PubMed

    Brossart, Daniel F; Parker, Richard I; Olson, Elizabeth A; Mahadevan, Lakshmi

    2006-09-01

    This study explored some practical issues for single-case researchers who rely on visual analysis of graphed data, but who also may consider supplemental use of promising statistical analysis techniques. The study sought to answer three major questions: (a) What is a typical range of effect sizes from these analytic techniques for data from "effective interventions"? (b) How closely do results from these same analytic techniques concur with visual-analysis-based judgments of effective interventions? and (c) What role does autocorrelation play in interpretation of these analytic results? To answer these questions, five analytic techniques were compared with the judgments of 45 doctoral students and faculty, who rated intervention effectiveness from visual analysis of 35 fabricated AB design graphs. Implications for researchers and practitioners using single-case designs are discussed.
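
    For readers unfamiliar with the quantities involved, the following Python sketch computes one simple effect size (a standardized mean difference between the A and B phases) and the lag-1 autocorrelation that can complicate its interpretation; the short series is fabricated for illustration only.

        import numpy as np

        # Illustrative AB single-case data: baseline (A) then intervention (B)
        phase_a = np.array([3, 4, 3, 5, 4, 3], dtype=float)
        phase_b = np.array([6, 7, 8, 7, 9, 8, 9], dtype=float)

        # Simple standardized mean difference (one of many possible effect sizes)
        pooled_sd = np.sqrt(((phase_a.size - 1) * phase_a.var(ddof=1) +
                             (phase_b.size - 1) * phase_b.var(ddof=1)) /
                            (phase_a.size + phase_b.size - 2))
        effect_size = (phase_b.mean() - phase_a.mean()) / pooled_sd

        # Lag-1 autocorrelation of the full series, which can bias such statistics
        series = np.concatenate([phase_a, phase_b])
        lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]

        print(f"standardized mean difference = {effect_size:.2f}, lag-1 r = {lag1:.2f}")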

  18. Does bisphenol A induce superfeminization in Marisa cornuarietis? Part II: toxicity test results and requirements for statistical power analyses.

    PubMed

    Forbes, Valery E; Aufderheide, John; Warbritton, Ryan; van der Hoeven, Nelly; Caspers, Norbert

    2007-03-01

    This study presents results of the effects of bisphenol A (BPA) on adult egg production, egg hatchability, egg development rates and juvenile growth rates in the freshwater gastropod, Marisa cornuarietis. We observed no adult mortality, substantial inter-snail variability in reproductive output, and no effects of BPA on reproduction during 12 weeks of exposure to 0, 0.1, 1.0, 16, 160 or 640 microg/L BPA. We observed no effects of BPA on egg hatchability or timing of egg hatching. Juveniles showed good growth in the control and all treatments, and there were no significant effects of BPA on this endpoint. Our results do not support previous claims of enhanced reproduction in Marisa cornuarietis in response to exposure to BPA. Statistical power analysis indicated high levels of inter-snail variability in the measured endpoints and highlighted the need for sufficient replication when testing treatment effects on reproduction in M. cornuarietis with adequate power.
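
    A prospective power calculation of the kind recommended above can be sketched as follows; the effect size, alpha and power values are placeholders rather than figures from the study, and the statsmodels call is one common way to perform such a calculation.

        from statsmodels.stats.power import TTestIndPower

        # Placeholder inputs: expected standardized effect size (Cohen's d)
        # between a treatment and the control, significance level, desired power
        analysis = TTestIndPower()
        n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                           alternative="two-sided")
        print(f"replicates needed per group: {n_per_group:.1f}")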

  19. Source apportionment of groundwater pollutants in Apulian agricultural sites using multivariate statistical analyses: case study of Foggia province

    PubMed Central

    2012-01-01

    Background Ground waters are an important resource of water supply for human health and activities. Groundwater uses and applications are often related to its composition, which is increasingly influenced by human activities. In fact the water quality of groundwater is affected by many factors including precipitation, surface runoff, groundwater flow, and the characteristics of the catchment area. During the years 2004-2007 the Agricultural and Food Authority of Apulia Region has implemented the project “Expansion of regional agro-meteorological network” in order to assess, monitor and manage regional groundwater quality. The total wells monitored during this activity amounted to 473, and the water samples analyzed were 1021. This resulted in a huge and complex data matrix comprised of a large number of physical-chemical parameters, which are often difficult to interpret and from which it is difficult to draw meaningful conclusions. The application of different multivariate statistical techniques such as Cluster Analysis (CA), Principal Component Analysis (PCA), Absolute Principal Component Scores (APCS) for interpretation of the complex databases offers a better understanding of water quality in the study region. Results From the results obtained by Principal Component and Cluster Analysis applied to the data set of the Foggia province, it is evident that some of the sampling sites investigated show dissimilarities, mostly due to the location of the site, the land use and management techniques, and groundwater overuse. By the APCS method it was possible to identify three pollutant sources: Agricultural pollution 1 due to fertilizer applications, Agricultural pollution 2 due to microelements for agriculture and groundwater overuse, and a third source that can be identified as soil runoff and rock tracer mining. Conclusions Multivariate statistical methods represent a valid tool to understand the complex nature of groundwater quality issues, determine priorities in the use of ground waters as irrigation water
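
    A minimal sketch of the PCA-plus-cluster-analysis workflow on a well-by-parameter matrix might look like the following; the data are randomly generated stand-ins and the choice of three components and three clusters is purely illustrative.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster

        # Hypothetical matrix: rows = wells, columns = physico-chemical parameters
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 8))            # stand-in for measured concentrations

        Xs = StandardScaler().fit_transform(X)  # PCA and CA are scale-sensitive

        pca = PCA(n_components=3)
        scores = pca.fit_transform(Xs)          # principal component scores per well
        print("explained variance:", pca.explained_variance_ratio_.round(2))

        # Hierarchical cluster analysis (Ward linkage) on the standardized data
        groups = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")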

  20. Instrumental and multivariate statistical analyses for the characterisation of the geographical origin of Apulian virgin olive oils.

    PubMed

    Longobardi, F; Ventrella, A; Casiello, G; Sacco, D; Catucci, L; Agostiano, A; Kontominas, M G

    2012-07-15

    In this paper, virgin olive oils (VOOs) from three different geographic origins of Apulia were analysed for free acidity, peroxide value, spectrophotometric indexes, chlorophyll content, sterol, fatty acid, and triacylglycerol compositions. In order to predict the geographical origin of VOOs, different multivariate approaches were applied. By performing principal component analysis (PCA) a modest natural grouping of the VOOs was observed on the basis of their origin, and consequently three supervised techniques, i.e., general discriminant analysis (GDA), partial least squares-discriminant analysis (PLS-DA) and soft independent modelling of class analogy (SIMCA), were used and the results were compared. In particular, the best prediction ability was produced by applying GDA (average prediction ability of 82.5%), although interesting results were also obtained with the other two classification techniques, i.e., 77.2% and 75.5% for PLS-DA and SIMCA, respectively.
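
    PLS-DA is often implemented as a PLS regression against a one-hot class matrix, classifying each sample by the largest predicted column; the sketch below follows that convention with invented data, so the reported prediction ability is meaningless except as a demonstration of the mechanics.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.preprocessing import StandardScaler

        # Hypothetical data: rows = oil samples, columns = chemical descriptors,
        # y = geographical origin coded as three classes
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 12))
        y = rng.integers(0, 3, size=60)

        Xs = StandardScaler().fit_transform(X)
        Y = np.eye(3)[y]                      # one-hot class membership matrix

        # PLS-DA: PLS regression on the class matrix, argmax gives the class
        pls = PLSRegression(n_components=4)
        Y_hat = cross_val_predict(pls, Xs, Y, cv=5)
        y_pred = Y_hat.argmax(axis=1)
        print("cross-validated prediction ability:", (y_pred == y).mean())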

  1. Identification of indicator congeners and evaluation of emission pattern of polychlorinated naphthalenes in industrial stack gas emissions by statistical analyses.

    PubMed

    Liu, Guorui; Cai, Zongwei; Zheng, Minghui; Jiang, Xiaoxu; Nie, Zhiqiang; Wang, Mei

    2015-01-01

    Identifying marker congeners of unintentionally produced polychlorinated naphthalenes (PCNs) from industrial thermal sources might be useful for predicting total PCN (∑2-8PCN) emissions by the determination of only indicator congeners. In this study, potential indicator congeners were identified based on the PCN data in 122 stack gas samples from over 60 plants involved in more than ten industrial thermal sources reported in our previous case studies. Linear regression analyses showed that the concentrations of CN27/30, CN52/60, and CN66/67 correlated significantly with ∑2-8PCN (R²=0.77, 0.80, and 0.58, respectively; n=122, p<0.05), making them good candidates for indicator congeners. Equations describing relationships between indicators and ∑2-8PCN were established. The linear regression analyses involving 122 samples showed that the relationships between the indicator congeners and ∑2-8PCN were not significantly affected by factors such as industry types, raw materials used, or operating conditions. Hierarchical cluster analysis and similarity calculations for the 122 stack gas samples were used to group the samples and evaluate their similarities and differences based on the PCN homolog distributions from different industrial thermal sources. Generally, the fractions of less chlorinated homologs (di-, tri-, and tetra-homologs) were much higher than those of more chlorinated homologs for the 111 stack gas samples contained in groups 1 and 2, indicating the dominance of lower chlorinated homologs in stack gas from industrial thermal sources.
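
    The indicator-congener idea reduces to a simple linear regression of total PCN concentration on a single congener; the sketch below shows the mechanics with invented concentrations (units and values are illustrative only).

        import numpy as np
        from scipy import stats

        # Hypothetical stack-gas data: indicator congener concentration (e.g. CN52/60)
        # versus total PCN concentration per sample
        indicator = np.array([1.2, 3.5, 0.8, 5.1, 2.2, 4.4, 0.5, 3.0])
        total_pcn = np.array([15.0, 44.1, 9.8, 66.0, 27.5, 58.2, 6.1, 38.7])

        res = stats.linregress(indicator, total_pcn)
        print(f"R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3g}")
        print(f"predicted total = {res.intercept:.1f} + {res.slope:.1f} * indicator")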

  2. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  3. Hydrogeochemical Processes of Groundwater Using Multivariate Statistical Analyses and Inverse Geochemical Modeling in Samrak Park of Nakdong River Basin, Korea

    NASA Astrophysics Data System (ADS)

    Chung, Sang Yong

    2015-04-01

    Multivariate statistical methods and inverse geochemical modelling were used to assess the hydrogeochemical processes of groundwater in the Nakdong River basin. The study area is located in a part of the Nakdong River basin in the Busan Metropolitan City, Korea. Quaternary deposits form the Samrak Park region and are underlain by intrusive rocks of the Bulkuksa group and sedimentary rocks of the Yucheon group of the Cretaceous Period. The Samrak Park region hosts two aquifer systems, an unconfined aquifer and a confined aquifer. The unconfined aquifer consists of upper sand, and the confined aquifer comprises clay, lower sand, gravel and weathered rock. Porosity and hydraulic conductivity of the area are 37 to 59% and 1.7 to 200 m/day, respectively. Depth of the wells ranges from 9 to 77 m. In Piper's trilinear diagram, the CaCl2 type was characteristic of the unconfined aquifer and the NaCl type was dominant in the confined aquifer. By hierarchical cluster analysis (HCA), Group 1 and Group 2 are composed entirely of unconfined aquifer and confined aquifer samples, respectively. In factor analysis (FA), Factor 1 is described by the strong loadings of EC, Na, K, Ca, Mg, Cl, HCO3, SO4 and Si, and Factor 2 by the strong loadings of pH and Al. Based on the Gibbs diagram, the unconfined and confined aquifer samples are scattered discretely in the rock and evaporation areas. The principal hydrogeochemical processes occurring in the confined and unconfined aquifers are ion exchange due to freshening under natural recharge, and water-rock interactions followed by evaporation and dissolution. The saturation indices of minerals such as Ca-montmorillonite, dolomite and calcite indicate oversaturation, while albite, gypsum and halite are undersaturated. Inverse geochemical modeling using the PHREEQC code demonstrated that relatively few phases were required to derive the differences in groundwater chemistry along the flow path in the area. It also suggested that dissolution of carbonate and ion exchange

  4. Statistical Analyses of Satellite Cloud Object Data From CERES. Part 4; Boundary-layer Cloud Objects During 1998 El Nino

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce A.; Parker, Lindsay

    2006-01-01

    Three boundary-layer cloud object types, stratus, stratocumulus and cumulus, that occurred over the Pacific Ocean during January-August 1998, are identified from the CERES (Clouds and the Earth's Radiant Energy System) single scanner footprint (SSF) data from the TRMM (Tropical Rainfall Measuring Mission) satellite. This study emphasizes the differences and similarities in the characteristics of each cloud-object type between the tropical and subtropical regions and among different size categories and among small geographic areas. Both the frequencies of occurrence and statistical distributions of cloud physical properties are analyzed. In terms of frequencies of occurrence, stratocumulus clouds dominate the entire boundary layer cloud population in all regions and among all size categories. Stratus clouds are more prevalent in the subtropics and near the coastal regions, while cumulus clouds are relatively prevalent over open ocean and the equatorial regions, particularly within the small size categories. The largest size category of stratus cloud objects occurs more frequently in the subtropics than in the tropics and has much larger average size than its cumulus and stratocumulus counterparts. Each of the three cloud object types exhibits small differences in statistical distributions of cloud optical depth, liquid water path, TOA albedo and perhaps cloud-top height, but large differences in those of cloud-top temperature and OLR between the tropics and subtropics. Differences in the sea surface temperature (SST) distributions between the tropics and subtropics influence some of the cloud macrophysical properties, but cloud microphysical properties and albedo for each cloud object type are likely determined by (local) boundary-layer dynamics and structures. Systematic variations of cloud optical depth, TOA albedo, cloud-top height, OLR and SST with cloud object sizes are pronounced for the stratocumulus and stratus types, which are related to systematic

  5. Water quality variation in the highly disturbed Huai River Basin, China from 1994 to 2005 by multi-statistical analyses.

    PubMed

    Zhai, Xiaoyan; Xia, Jun; Zhang, Yongyong

    2014-10-15

    Water quality deterioration is a prominent issue threatening water security throughout the world. Huai River Basin, as the sixth largest basin in China, is facing the most severe water pollution and high disturbance. Statistical detection of water quality trends and identification of human interferences are significant for sustainable water quality management. Three key water quality elements (ammonium nitrogen: NH3-N, permanganate index: CODMn and dissolved oxygen: DO) at 18 monitoring stations were selected to analyze their spatio-temporal variations in the highly disturbed Huai River Basin using seasonal Mann-Kendall test and Moran's I method. Relationship between surrounding water environment and anthropogenic activities (point source emission, land use) was investigated by regression analysis. The results indicated that water environment was significantly improved on the whole from 1994 to 2005. CODMn and NH3-N concentrations decreased at half of the stations, and DO concentration increased significantly at 39% (7/18) stations. The high pollution cluster centers for both NH3-N and CODMn were in the middle stream of Shaying River and Guo River in the 2000s. Water quality of Huai River Basin was mainly influenced by point source pollution emission, flows regulated by dams, water temperature and land use variations and so on. This study was expected to provide insights into water quality evolution and foundations for water quality management in Huai River Basin, and scientific references for the implementation of water pollution prevention in China.
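
    A simplified seasonal Mann-Kendall test (ignoring tie corrections) can be written directly from its definition, summing Kendall's S statistic over seasons; the monthly series below is synthetic and only illustrates the mechanics.

        import numpy as np

        def seasonal_mann_kendall(values, seasons):
            """Seasonal Mann-Kendall trend test (no tie correction, for illustration).

            values  : 1-D array of a water quality variable (e.g. monthly NH3-N)
            seasons : 1-D array of season labels (e.g. month 1-12) per observation
            Returns the combined S statistic and the normal-approximation Z score.
            """
            s_total, var_total = 0.0, 0.0
            for season in np.unique(seasons):
                x = values[seasons == season]     # chronological series for one season
                n = x.size
                # S: increasing minus decreasing later-minus-earlier pairs
                s = sum(np.sign(x[j] - x[i])
                        for i in range(n - 1) for j in range(i + 1, n))
                s_total += s
                var_total += n * (n - 1) * (2 * n + 5) / 18.0
            z = (s_total - np.sign(s_total)) / np.sqrt(var_total) if var_total else 0.0
            return s_total, z

        # Illustrative use: 12 years of monthly concentrations with a weak downtrend
        rng = np.random.default_rng(2)
        conc = rng.lognormal(mean=0.0, sigma=0.3, size=144) - np.arange(144) * 0.002
        months = np.tile(np.arange(1, 13), 12)
        print(seasonal_mann_kendall(conc, months))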

  6. Extended Statistical Short-Range Guidance for Peak Wind Speed Analyses at the Shuttle Landing Facility: Phase II Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2003-01-01

    This report describes the results from Phase II of the AMU's Short-Range Statistical Forecasting task for peak winds at the Shuttle Landing Facility (SLF). The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. Probability density functions (PDF) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. A PC-based Graphical User Interface (GUI) tool was created to display the data quickly.
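
    The conditional peak-wind probabilities described above can be estimated empirically by binning observations on the average speed; the sketch below uses synthetic wind data and an arbitrary 25 kt threshold purely for illustration.

        import numpy as np

        # Illustrative 5-minute average and peak wind speeds (kt) from a tower record
        rng = np.random.default_rng(3)
        avg = rng.gamma(shape=4.0, scale=2.5, size=5000)
        peak = avg * rng.uniform(1.2, 1.8, size=5000)

        # Empirical probability of meeting or exceeding a peak threshold,
        # conditioned on the observed average speed falling in a given bin
        threshold = 25.0
        bins = np.arange(0, 35, 5)
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (avg >= lo) & (avg < hi)
            if mask.any():
                p = (peak[mask] >= threshold).mean()
                print(f"avg {lo:2.0f}-{hi:2.0f} kt: P(peak >= {threshold} kt) = {p:.2f}")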

  7. Statistical and molecular analyses of evolutionary significance of red-green color vision and color blindness in vertebrates.

    PubMed

    Yokoyama, Shozo; Takenaka, Naomi

    2005-04-01

    Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

  8. Combining the Power of Statistical Analyses and Community Interviews to Identify Adoption Barriers for Stormwater Best-Management Practices

    NASA Astrophysics Data System (ADS)

    Hoover, F. A.; Bowling, L. C.; Prokopy, L. S.

    2015-12-01

    Urban stormwater is an on-going management concern in municipalities of all sizes. In both combined and separate sewer systems, pollutants from stormwater runoff enter the natural waterway system during heavy rain events. Urban flooding during frequent and more intense storms is also a growing concern. Therefore, stormwater best-management practices (BMPs) are being implemented in efforts to reduce and manage stormwater pollution and overflow. The majority of BMP water quality studies focus on the small-scale, individual effects of the BMP, and the change in water quality directly from the runoff of these infrastructures. At the watershed scale, it is difficult to establish statistically whether or not these BMPs are making a difference in water quality, given that watershed scale monitoring is often costly and time consuming, relying on significant sources of funds, which a city may not have. Hence, there is a need to quantify the level of sampling needed to detect the water quality impact of BMPs at the watershed scale. In this study, a power analysis was performed on data from an urban watershed in Lafayette, Indiana, to determine the frequency of sampling required to detect a significant change in water quality measurements. Using the R platform, results indicate that detecting a significant change in watershed level water quality would require hundreds of weekly measurements, even when improvement is present. The second part of this study investigates whether the difficulty in demonstrating water quality change represents a barrier to adoption of stormwater BMPs. Semi-structured interviews of community residents and organizations in Chicago, IL are being used to investigate residents' understanding of water quality and best management practices and identify their attitudes and perceptions towards stormwater BMPs. Second-round interviews will examine how information on uncertainty in water quality improvements influences their BMP attitudes and perceptions.

  9. Statistical analyses in Swedish randomised trials on mammography screening and in other randomised trials on cancer screening: a systematic review

    PubMed Central

    Boniol, Mathieu; Smans, Michel; Sullivan, Richard; Boyle, Peter

    2015-01-01

    Objectives We compared calculations of relative risks of cancer death in Swedish mammography trials and in other cancer screening trials. Participants Men and women from 30 to 74 years of age. Setting Randomised trials on cancer screening. Design For each trial, we identified the intervention period, when screening was offered to screening groups and not to control groups, and the post-intervention period, when screening (or absence of screening) was the same in screening and control groups. We then examined which cancer deaths had been used for the computation of relative risk of cancer death. Main outcome measures Relative risk of cancer death. Results In 17 non-breast screening trials, deaths due to cancers diagnosed during the intervention and post-intervention periods were used for relative risk calculations. In the five Swedish trials, relative risk calculations used deaths due to breast cancers found during intervention periods, but deaths due to breast cancer found at first screening of control groups were added to these groups. After reallocation of the added breast cancer deaths to post-intervention periods of control groups, relative risks of 0.86 (0.76; 0.97) were obtained for cancers found during intervention periods and 0.83 (0.71; 0.97) for cancers found during post-intervention periods, indicating constant reduction in the risk of breast cancer death during follow-up, irrespective of screening. Conclusions The use of unconventional statistical methods in Swedish trials has led to overestimation of risk reduction in breast cancer death attributable to mammography screening. The constant risk reduction observed in screening groups was probably due to the trial design that optimised awareness and medical management of women allocated to screening groups. PMID:26152677
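
    The relative risk of cancer death and its Wald-type confidence interval can be computed as in the following sketch; the death counts and arm sizes are hypothetical, not data from the trials discussed.

        import math

        def relative_risk(deaths_scr, n_scr, deaths_ctl, n_ctl, z=1.96):
            """Relative risk of cancer death (screening vs control) with a Wald
            95% confidence interval on the log scale. Counts are illustrative."""
            rr = (deaths_scr / n_scr) / (deaths_ctl / n_ctl)
            se_log = math.sqrt(1 / deaths_scr - 1 / n_scr + 1 / deaths_ctl - 1 / n_ctl)
            lo = math.exp(math.log(rr) - z * se_log)
            hi = math.exp(math.log(rr) + z * se_log)
            return rr, lo, hi

        # Hypothetical counts: breast cancer deaths and women randomised per arm
        print(relative_risk(deaths_scr=90, n_scr=40000, deaths_ctl=105, n_ctl=40000))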

  10. Testing for Additivity in Chemical Mixtures Using a Fixed-Ratio Ray Design and Statistical Equivalence Testing Methods

    EPA Science Inventory

    Fixed-ratio ray designs have been used for detecting and characterizing interactions of large numbers of chemicals in combination. Single chemical dose-response data are used to predict an “additivity curve” along an environmentally relevant ray. A “mixture curve” is estimated fr...

  11. CT-Based Attenuation Correction in Brain SPECT/CT Can Improve the Lesion Detectability of Voxel-Based Statistical Analyses

    PubMed Central

    Kato, Hiroki; Shimosegawa, Eku; Fujino, Koichi; Hatazawa, Jun

    2016-01-01

    Background Integrated SPECT/CT enables non-uniform attenuation correction (AC) using built-in CT instead of the conventional uniform AC. The effect of CT-based AC on voxel-based statistical analyses of brain SPECT findings has not yet been clarified. Here, we assessed differences in the detectability of regional cerebral blood flow (CBF) reduction using SPECT voxel-based statistical analyses based on the two types of AC methods. Subjects and Methods N-isopropyl-p-[123I]iodoamphetamine (IMP) CBF SPECT images were acquired for all the subjects and were reconstructed using 3D-OSEM with two different AC methods: Chang's method (Chang's AC) and the CT-based AC method. A normal database was constructed for the analysis using SPECT findings obtained for 25 healthy normal volunteers. Voxel-based Z-statistics were also calculated for SPECT findings obtained for 15 patients with chronic cerebral infarctions and 10 normal subjects. We assumed that an analysis with a higher specificity would likely produce a lower mean absolute Z-score for normal brain tissue, and a more sensitive voxel-based statistical analysis would likely produce a higher absolute Z-score in old infarct lesions, where the CBF was severely decreased. Results The inter-subject variation in the voxel values in the normal database was lower using CT-based AC, compared with Chang's AC, for most of the brain regions. The absolute Z-score indicating a SPECT count reduction in infarct lesions was also significantly higher in the images reconstructed using CT-based AC, compared with Chang's AC (P = 0.003). The mean absolute value of the Z-score in the 10 intact brains was significantly lower in the images reconstructed using CT-based AC than in those reconstructed using Chang's AC (P = 0.005). Conclusions Non-uniform CT-based AC by integrated SPECT/CT significantly improved the sensitivity and specificity of the voxel-based statistical analyses for regional SPECT count reductions, compared with
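
    Voxel-based Z-statistics against a normal database reduce to a per-voxel standardization; the sketch below uses simulated count arrays (sizes, means and the defect are invented) to show the computation.

        import numpy as np

        # Hypothetical SPECT count arrays, already spatially and count-normalised:
        # a normal database (subjects x voxels) and one patient
        rng = np.random.default_rng(4)
        normal_db = rng.normal(loc=100.0, scale=8.0, size=(25, 10000))
        patient = rng.normal(loc=100.0, scale=8.0, size=10000)
        patient[:200] -= 35.0                      # simulated perfusion defect

        mean_map = normal_db.mean(axis=0)
        sd_map = normal_db.std(axis=0, ddof=1)

        # Voxel-based Z-statistic: negative values indicate a regional count reduction
        z_map = (patient - mean_map) / sd_map
        print("voxels with Z < -3:", int((z_map < -3).sum()))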

  12. The interpretation of long-term trials of biologic treatments for psoriasis: trial designs and the choices of statistical analyses affect ability to compare outcomes across trials.

    PubMed

    Langley, R G; Reich, K

    2013-12-01

    Psoriasis is a chronic disease requiring long-term therapy, which makes finding treatments with favourable long-term safety and efficacy profiles crucial. The goal of this review is to provide the background needed to evaluate properly long-term studies of biologic treatments for psoriasis. Firstly, important elements of design and analysis strategies are described. Secondly, data from published trials of biologic therapies for psoriasis are reviewed in light of the design and analysis choices implemented in the studies. Published reports of clinical trials of biologic treatments (adalimumab, alefacept, etanercept, infliximab or ustekinumab) that lasted 33 weeks or longer and included efficacy results and statistical analysis were reviewed. Study designs and statistical analyses were evaluated and summarized, emphasizing patient follow-up methods and handling of missing data. Various trial designs and data handling methods are used in long-term studies of biologic psoriasis treatments. Responder analyses in long-term trials can be conducted in responder enrichment, re-treated nonresponder or intent-to-treat trials. Missing data can be handled in four ways, including, from most to least conservative, nonresponder imputation, last-observation-carried-forward, as-observed analysis and anytime analysis. Long-term clinical trials have shown that adalimumab, alefacept, etanercept, infliximab and ustekinumab are efficacious for psoriasis treatment; however, without common standards for these trials, direct comparisons of these agents are difficult. Understanding differences in trial design and data handling is essential to make informed treatment decisions.

  14. Synthesis of Aza-m-Xylylene diradicals with large singlet-triplet energy gap and statistical analyses of their EPR spectra

    SciTech Connect

    Olankitwanit, Arnon; Pink, Maren; Rajca, Suchada; Rajca, Andrzej

    2014-10-08

    We describe synthesis and characterization of a derivative of aza-m-xylylene, diradical 2, that is persistent in solution at room temperature with the half-life measured in minutes (~80–250 s) and in which the triplet ground state is below the lowest singlet state by >10 kcal mol⁻¹. The triplet ground states and ΔEST of 2 in glassy solvent matrix are determined by a new approach based on statistical analyses of their EPR spectra. Characterization and analysis of the analogous diradical 1 are carried out for comparison. Statistical analyses of their EPR spectra reliably provide improved lower bounds for ΔEST (from >0.4 to >0.6 kcal mol⁻¹) and are compatible with a wide range of relative contents of diradical vs monoradical, including samples in which the diradical and monoradical are minor and major components, respectively. This demonstrates a new powerful method for the determination of the triplet ground states and ΔEST applicable to moderately pure diradicals in matrices.

  15. The Monte Carlo method as a tool for statistical characterisation of differential and additive phase shifting algorithms

    NASA Astrophysics Data System (ADS)

    Miranda, M.; Dorrío, B. V.; Blanco, J.; Diz-Bugarín, J.; Ribas, F.

    2011-01-01

    Several metrological applications base their measurement principle on the phase sum or difference between two patterns, one original s(r,φ) and another modified t(r,φ+Δφ). Additive or differential phase shifting algorithms directly recover the sum 2φ+Δφ or the difference Δφ of phases without requiring prior calculation of the individual phases. These algorithms can be constructed, for example, from a suitable combination of known phase shifting algorithms. Little has been written on the design, analysis and error compensation of these new two-stage algorithms. Previously we have used computer simulation to study, in a linear approach or with a filter process in reciprocal space, the response of several families of them to the main error sources. In this work we present an error analysis that uses Monte Carlo simulation to achieve results in good agreement with those obtained with spatial and temporal methods.

  16. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology.

    PubMed

    Cafri, Guy; Kromrey, Jeffrey D; Brannick, Michael T

    2010-03-31

    This article uses meta-analyses published in Psychological Bulletin from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual moderators in multivariate analyses, and tests of residual variability within individual levels of categorical moderators had the lowest and most concerning levels of power. Using methods of calculating power prospectively for significance tests in meta-analysis, we illustrate how power varies as a function of the number of effect sizes, the average sample size per effect size, effect size magnitude, and level of heterogeneity of effect sizes. In most meta-analyses many significance tests were conducted, resulting in a sizable estimated probability of a Type I error, particularly for tests of means within levels of a moderator, univariate categorical moderators, and residual variability within individual levels of a moderator. Across all surveyed studies, the median effect size and the median difference between two levels of study level moderators were smaller than Cohen's (1988) conventions for a medium effect size for a correlation or difference between two correlations. The median Birge's (1932) ratio was larger than the convention of medium heterogeneity proposed by Hedges and Pigott (2001) and indicates that the typical meta-analysis shows variability in underlying effects well beyond that expected by sampling error alone. Fixed-effects models were used with greater frequency than random-effects models; however, random-effects models were used with increased frequency over time. Results related to model selection of this study are carefully compared with those from Schmidt, Oh, and Hayes (2009), who independently designed and produced a study similar to the one reported here. Recommendations for conducting future meta-analyses

  17. Unlocking Data for Statistical Analyses and Data Mining: Generic Case Extraction of Clinical Items from i2b2 and tranSMART.

    PubMed

    Firnkorn, Daniel; Merker, Sebastian; Ganzinger, Matthias; Muley, Thomas; Knaup, Petra

    2016-01-01

    In medical science, modern IT concepts are increasingly important to gather new findings about complex diseases. Data Warehouses (DWH) as central data repository systems play a key role by providing standardized, high-quality and secure medical data for effective analyses. However, DWHs in medicine must fulfil various requirements concerning data privacy and the ability to describe the complexity of (rare) disease phenomena. Here, i2b2 and tranSMART are free alternatives representing DWH solutions especially developed for medical informatics purposes. However, some functionalities are not yet provided in a sufficient way. In fact, data import and export is still a major problem because of the diversity of schemas, parameter definitions and data quality, which are described variously in each single clinic. Further, statistical analyses inside i2b2 and tranSMART are possible, but restricted to the implemented functions. Thus, data export is needed to provide a data basis which can be directly included within statistics software like SPSS and SAS or data mining tools like Weka and RapidMiner. The standard export tools of i2b2 and tranSMART more or less create a database dump of key-value pairs which cannot be used immediately by the mentioned tools. They need an instance-based or a case-based representation of each patient. To overcome this limitation, we developed a concept called Generic Case Extractor (GCE) which pivots the key-value pairs of each clinical fact into a row-oriented format for each patient, sufficient to enable analyses in a broader context. Therefore, complex pivotisation routines were necessary to ensure temporal consistency, especially in terms of different data sets and the occurrence of identical but repeated parameters like follow-up data. GCE is embedded inside a comprehensive software platform for systems medicine. PMID:27577447
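
    The core of such a case extraction is a pivot from key-value facts to one row per patient; a minimal pandas sketch with hypothetical items is shown below (repeated, time-stamped parameters such as follow-up data would additionally need the temporal handling described above).

        import pandas as pd

        # Hypothetical key-value export: one row per clinical fact (patient, item, value)
        facts = pd.DataFrame({
            "patient_id": [1, 1, 1, 2, 2, 3],
            "item":       ["age", "sex", "stage", "age", "sex", "age"],
            "value":      [63, "m", "IIIa", 58, "f", 71],
        })

        # Pivot to one row per patient (case-based layout) for SPSS/Weka-style tools
        cases = facts.pivot(index="patient_id", columns="item", values="value")
        print(cases)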

  18. Delineation and evaluation of hydrologic-landscape regions in the United States using geographic information system tools and multivariate statistical analyses.

    USGS Publications Warehouse

    Wolock, D.M.; Winter, T.C.; McMahon, G.

    2004-01-01

    Hydrologic-landscape regions in the United States were delineated by using geographic information system (GIS) tools combined with principal components and cluster analyses. The GIS and statistical analyses were applied to land-surface form, geologic texture (permeability of the soil and bedrock), and climate variables that describe the physical and climatic setting of 43,931 small (approximately 200 km²) watersheds in the United States. (The term "watersheds" is defined in this paper as the drainage areas of tributary streams, headwater streams, and stream segments lying between two confluences.) The analyses grouped the watersheds into 20 noncontiguous regions based on similarities in land-surface form, geologic texture, and climate characteristics. The percentage of explained variance (R-squared value) in an analysis of variance was used to compare the hydrologic-landscape regions to 19 square geometric regions and the 21 U.S. Environmental Protection Agency level-II ecoregions. Hydrologic-landscape regions generally were better than ecoregions at delineating regions of distinct land-surface form and geologic texture. Hydrologic-landscape regions and ecoregions were equally effective at defining regions in terms of climate, land cover, and water-quality characteristics. For about half of the landscape, climate, and water-quality characteristics, the R-squared values of square geometric regions were as high as hydrologic-landscape regions or ecoregions.

  19. Analysing the spatial patterns of livestock anthrax in Kazakhstan in relation to environmental factors: a comparison of local (Gi*) and morphology cluster statistics.

    PubMed

    Kracalik, Ian T; Blackburn, Jason K; Lukhnova, Larisa; Pazilov, Yerlan; Hugh-Jones, Martin E; Aikimbayev, Alim

    2012-11-01

    We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as the total number of clusters in large ruminants for AMOEBA (n = 149) and for small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation. PMID:23242686
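
    For reference, the Getis-Ord Gi* statistic used above can be computed directly from its definition (Ord and Getis, 1995); the sketch below uses a toy contiguity matrix and invented outbreak counts, and does not attempt to reproduce AMOEBA.

        import numpy as np

        def getis_ord_gi_star(x, w):
            """Getis-Ord Gi* statistic for each location.

            x : 1-D array of values (e.g. anthrax outbreak counts per district)
            w : (n, n) spatial weights matrix with w[i, i] included (the 'star' form)
            Returns an array of Z-like Gi* scores; large positive values flag hot spots.
            """
            n = x.size
            x_bar = x.mean()
            s = np.sqrt((x ** 2).mean() - x_bar ** 2)
            wx = w @ x                               # sum_j w_ij * x_j
            w_sum = w.sum(axis=1)
            w_sq_sum = (w ** 2).sum(axis=1)
            denom = s * np.sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
            return (wx - x_bar * w_sum) / denom

        # Illustrative use with a small binary chain-contiguity matrix (self-included)
        counts = np.array([0., 1., 5., 7., 2., 0.])
        w = np.eye(6)
        for i in range(5):
            w[i, i + 1] = w[i + 1, i] = 1.0
        print(getis_ord_gi_star(counts, w).round(2))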

  1. Statistical properties of coastal long waves analysed through sea-level time-gradient functions: exemplary analysis of the Siracusa, Italy, tide-gauge data

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.

    2015-09-01

    This study presents a new method to analyse the properties of the sea-level signal recorded by coastal tide gauges in the long wave range that is in a window between wind/storm waves and tides and is typical of several phenomena like local seiches, coastal shelf resonances and tsunamis. The method consists of computing four specific functions based on the time gradient (slope) of the recorded sea level oscillations, namely the instantaneous slope (IS), and three more functions based on IS: the sea level (SL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral FFT analysis and also through a statistical analysis showing that they can be characterised by probability distribution functions (PDFs) such as the Student's t distribution (IS and SL) and the Beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.
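
    A minimal sketch of the first step of such an analysis, computing the instantaneous slope from a sea-level record and fitting the Student's t distribution mentioned above, is given below with a synthetic tidal signal standing in for real tide-gauge data.

        import numpy as np
        from scipy import stats

        # Illustrative 1-minute sea-level record (metres); a real application would
        # use the tide-gauge samples described above
        rng = np.random.default_rng(5)
        t = np.arange(0, 24 * 60)                   # minutes in one day
        eta = (0.3 * np.sin(2 * np.pi * t / (12.42 * 60))
               + 0.01 * rng.standard_normal(t.size))

        # Instantaneous slope (IS): time gradient of the recorded sea level
        IS = np.gradient(eta, 60.0)                 # m/s, with a 60 s sampling step

        # Fit one of the candidate probability distributions mentioned in the abstract
        df_t, loc_t, scale_t = stats.t.fit(IS)
        print(f"Student's t fit to IS: df = {df_t:.1f}, scale = {scale_t:.2e}")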

  2. How different statistical analyses of grain size data can be used for facies determination and palaeoenvironmental reconstructions - an example from the Black Sea coast of Georgia

    NASA Astrophysics Data System (ADS)

    Riedesel, Svenja; Opitz, Stephan; Kelterbaum, Daniel; Laermanns, Hannes; Seeliger, Martin; Rölkens, Julian; Elashvili, Mikaheil; Brueckner, Helmut

    2016-04-01

    Granulometric analyses enable precise and significant statements about sediment transport processes and depositional environments. Bivariate statistics of graphical parameters (mean, sorting, skewness, kurtosis) of grain-size distributions open the opportunity to analyse grain sizes in the context of sediment transport patterns and to differentiate between several depositional environments. While such approaches may be limited to unimodal grain-size distributions, the statistical method of the end-member modelling algorithm (EMMA) was created to solve the explicit mixing problem of multimodal grain-size distributions. EMMA enables the extraction of robust end-members from the original dataset. A comparison of extracted end-members with the grain-size distributions of recent surface samples allows inferences about transport processes and depositional environments. Bivariate statistics of graphical grain-size parameters and EMMA were performed on a 9 m long sediment record taken from a beach ridge sequence in the coastal area of western Georgia. Whereas biplots of the calculated parameters give valid information about modern environments, this method fails for the reconstruction of palaeoenvironments. However, by applying EMMA it is possible to extract four robust end-members and combine them with grain-size distributions of modern surface samples. Results gained from EMMA indicate a threefold division of the sediment core (Units 1, 2 and 3 - from bottom to top). End-members (EM) 1 and 2 show multimodal grain-size distributions, quite similar to the distributions of modern low-energy fluvial deposits. Such comparable distributions do not indicate exactly the same transport system in the present and the past, but hint at the energy level and the flow velocity of the transport medium. Whereas EM 1 and 2 represent most of the relative EM amount in Unit 2, EM 3 and 4 dominate Units 1 and 3. They are represented by unimodal distributions, differing only in the position of their peak, which is

  3. Categorization of the trophic status of a hydroelectric power plant reservoir in the Brazilian Amazon by statistical analyses and fuzzy approaches.

    PubMed

    da Costa Lobato, Tarcísio; Hauser-Davis, Rachel Ann; de Oliveira, Terezinha Ferreira; Maciel, Marinalva Cardoso; Tavares, Maria Regina Madruga; da Silveira, Antônio Morais; Saraiva, Augusto Cesar Fonseca

    2015-02-15

    The Amazon area has been increasingly suffering from anthropogenic impacts, especially due to the construction of hydroelectric power plant reservoirs. The analysis and categorization of the trophic status of these reservoirs are of interest to indicate man-made changes in the environment. In this context, the present study aimed to categorize the trophic status of a hydroelectric power plant reservoir located in the Brazilian Amazon by constructing a novel Water Quality Index (WQI) and Trophic State Index (TSI) for the reservoir using major ion concentrations and physico-chemical water parameters determined in the area and taking into account the sampling locations and the local hydrological regimes. After applying statistical analyses (factor analysis and cluster analysis) and establishing a rule base of a fuzzy system to these indicators, the results obtained by the proposed method were then compared to the generally applied Carlson and a modified Lamparelli trophic state index (TSI), specific for trophic regions. The categorization of the trophic status by the proposed fuzzy method was shown to be more reliable, since it takes into account the specificities of the study area, while the Carlson and Lamparelli TSI do not, and, thus, tend to over or underestimate the trophic status of these ecosystems. The statistical techniques proposed and applied in the present study, are, therefore, relevant in cases of environmental management and policy decision-making processes, aiding in the identification of the ecological status of water bodies. With this, it is possible to identify which factors should be further investigated and/or adjusted in order to attempt the recovery of degraded water bodies.

  4. A preliminary study of the statistical analyses and sampling strategies associated with the integration of remote sensing capabilities into the current agricultural crop forecasting system

    NASA Technical Reports Server (NTRS)

    Sand, F.; Christie, R.

    1975-01-01

    Extending the crop survey application of remote sensing from small experimental regions to state and national levels requires that a sample of agricultural fields be chosen for remote sensing of crop acreage, and that a statistical estimate be formulated with measurable characteristics. The critical requirements for the success of the application are reviewed in this report. The problem of sampling in the presence of cloud cover is discussed. Integration of remotely sensed information about crops into current agricultural crop forecasting systems is treated on the basis of the USDA multiple frame survey concepts, with an assumed addition of a new frame derived from remote sensing. Evolution of a crop forecasting system which utilizes LANDSAT and future remote sensing systems is projected for the 1975-1990 time frame.

  5. Novel Flow Cytometry Analyses of Boar Sperm Viability: Can the Addition of Whole Sperm-Rich Fraction Seminal Plasma to Frozen-Thawed Boar Sperm Affect It?

    PubMed Central

    Díaz, Rommy; Boguen, Rodrigo; Martins, Simone Maria Massami Kitamura; Ravagnani, Gisele Mouro; Leal, Diego Feitosa; Oliveira, Melissa de Lima; Muro, Bruno Bracco Donatelli; Parra, Beatriz Martins; Meirelles, Flávio Vieira; Papa, Frederico Ozanan; Dell’Aqua, José Antônio; Alvarenga, Marco Antônio; Moretti, Aníbal de Sant’Anna; Sepúlveda, Néstor

    2016-01-01

    Boar semen cryopreservation remains a challenge due to the extent of cold shock damage. Thus, many alternatives have emerged to improve the quality of frozen-thawed boar sperm. The use of seminal plasma arising from the boar sperm-rich fraction (SP-SRF) has shown good efficacy; however, the majority of current sperm evaluation techniques include a single or dual sperm parameter analysis, which overrates the real sperm viability. Within this context, this work was performed to introduce a sperm flow cytometry fourfold stain technique for simultaneous evaluation of plasma and acrosomal membrane integrity and mitochondrial membrane potential. We then used the sperm flow cytometry fourfold stain technique to study the effect of SP-SRF on frozen-thawed boar sperm and further evaluated the effect of this treatment on sperm movement, tyrosine phosphorylation and fertility rate (FR). The sperm fourfold stain technique is accurate (R² = 0.9356, p > 0.01) for simultaneous evaluation of plasma and acrosomal membrane integrity and mitochondrial membrane potential (IPIAH cells). Centrifugation pre-cryopreservation was not deleterious (p > 0.05) for any analyzed variables. Addition of SP-SRF after cryopreservation was able to improve total and progressive motility (p < 0.05) when boar semen was cryopreserved without SP-SRF; however, it was not able to decrease tyrosine phosphorylation (p > 0.05) or improve IPIAH cells (p > 0.05). FR was not (p > 0.05) statistically increased by the addition of seminal plasma, though females inseminated with frozen-thawed boar semen plus SP-SRF did perform better than those inseminated with sperm lacking seminal plasma. Thus, we conclude that the sperm fourfold stain can be used to simultaneously evaluate plasma and acrosomal membrane integrity and mitochondrial membrane potential, and the addition of SP-SRF to thawed boar semen cryopreserved in the absence of SP-SRF improves its total and progressive motility. PMID:27529819

  6. Statistical properties of coastal long waves analysed through sea-level time-gradient functions: exemplary analysis of the Siracusa, Italy, tide-gauge data

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.

    2016-01-01

    This study presents a new method to analyse the properties of the sea-level signal recorded by coastal tide gauges in the long wave range that is in a window between wind/storm waves and tides and is typical of several phenomena like local seiches, coastal shelf resonances and tsunamis. The method consists of computing four specific functions based on the time gradient (slope) of the recorded sea level oscillations, namely the instantaneous slope (IS) as well as three more functions based on IS, namely the reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions (PDFs) such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.

  7. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    PubMed

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is an advantage compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data. PMID:27230258

  8. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…

  9. Experimental and statistical analyses to characterize in-vehicle fine particulate matter behavior inside public transit buses operating on B20-grade biodiesel fuel

    NASA Astrophysics Data System (ADS)

    Vijayan, Abhilash; Kumar, Ashok

    2010-11-01

    This paper presents results from an in-vehicle air quality study of public transit buses in Toledo, Ohio, involving continuous monitoring, and experimental and statistical analyses to understand in-vehicle particulate matter (PM) behavior inside buses operating on B20-grade biodiesel fuel. The study also focused on evaluating the effects of vehicle's fuel type, operating periods, operation status, passenger counts, traffic conditions, and the seasonal and meteorological variation on particulates with aerodynamic diameter less than 1 micron (PM 1.0). The study found that the average PM 1.0 mass concentrations in B20-grade biodiesel-fueled bus compartments were approximately 15 μg m -3, while PM 2.5 and PM 10 concentration averages were approximately 19 μg m -3 and 37 μg m -3, respectively. It was also observed that average hourly concentration trends of PM 1.0 and PM 2.5 followed a "μ-shaped" pattern during transit hours. Experimental analyses revealed that the in-vehicle PM 1.0 mass concentrations were higher inside diesel-fueled buses (10.0-71.0 μg m -3 with a mean of 31.8 μg m -3) as compared to biodiesel buses (3.3-33.5 μg m -3 with a mean of 15.3 μg m -3) when the windows were kept open. Vehicle idling conditions and open door status were found to facilitate smaller particle concentrations inside the cabin, while closed door facilitated larger particle concentrations suggesting that smaller particles were originating outside the vehicle and larger particles were formed within the cabin, potentially from passenger activity. The study also found that PM 1.0 mass concentrations at the back of bus compartment (5.7-39.1 μg m -3 with a mean of 28.3 μg m -3) were higher than the concentrations in the front (5.7-25.9 μg m -3 with a mean of 21.9 μg m -3), and the mass concentrations inside the bus compartment were generally 30-70% lower than the just-outside concentrations. Further, bus route, window position, and time of day were found to affect the in

  10. Analysing spatio-temporal patterns of the global NO2-distribution retrieved from GOME satellite observations using a generalized additive model

    NASA Astrophysics Data System (ADS)

    Hayn, M.; Beirle, S.; Hamprecht, F. A.; Platt, U.; Menze, B. H.; Wagner, T.

    2009-09-01

    With the increasing availability of observational data from different sources at a global level, joint analysis of these data is becoming especially attractive. For such an analysis - oftentimes with little prior knowledge about local and global interactions between the different observational variables at hand - an exploratory, data-driven analysis of the data may be of particular relevance. In the present work we used generalized additive models (GAM) in an exemplary study of spatio-temporal patterns in the tropospheric NO2-distribution derived from GOME satellite observations (1996 to 2001) at global scale. We focused on identifying correlations between NO2 and local wind fields, a quantity which is of particular interest in the analysis of spatio-temporal interactions. Formulating general functional, parametric relationships between the observed NO2 distribution and local wind fields, however, is difficult - if not impossible. So, rather than following a model-based analysis testing the data for predefined hypotheses (assuming, for example, sinusoidal seasonal trends), we used a GAM with non-parametric model terms to learn this functional relationship between NO2 and wind directly from the data. The NO2 observations were found to be affected by wind-dominated processes over large areas. We estimated the extent of areas affected by specific NO2 emission sources, and were able to highlight likely atmospheric transport "pathways". General temporal trends which were also part of our model - weekly, seasonal and linear changes - were in good agreement with previous studies and alternative ways of analysing the time series. Overall, using a non-parametric model provided favorable means for a rapid inspection of this large spatio-temporal NO2 data set, with less bias than parametric approaches, and allowing us to visualize dynamical processes of the NO2 distribution at a global scale.
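
    A GAM with non-parametric smooth terms can be fitted in a few lines; the sketch below assumes the third-party pygam package (statsmodels' GAM module would be an alternative) and uses synthetic predictors standing in for day of year and local wind fields.

        import numpy as np
        from pygam import LinearGAM, s   # assumed third-party package, not from the study

        rng = np.random.default_rng(6)
        n = 2000
        X = np.column_stack([rng.uniform(0, 365, n),     # day of year
                             rng.uniform(0, 360, n),     # local wind direction (deg)
                             rng.uniform(0, 15, n)])     # local wind speed (m/s)
        # Stand-in NO2 column signal with a seasonal cycle and a wind-speed effect
        y = (1.0 + 0.4 * np.sin(2 * np.pi * X[:, 0] / 365) - 0.03 * X[:, 2]
             + 0.1 * rng.standard_normal(n))

        # Non-parametric smooth terms learn the shape of each effect from the data
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, y)
        gam.summary()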

  11. Impact of enzalutamide on quality of life in men with metastatic castration-resistant prostate cancer after chemotherapy: additional analyses from the AFFIRM randomized clinical trial

    PubMed Central

    Cella, D.; Ivanescu, C.; Holmstrom, S.; Bui, C. N.; Spalding, J.; Fizazi, K.

    2015-01-01

    Background: To present longitudinal changes in Functional Assessment of Cancer Therapy-Prostate (FACT-P) scores during 25-week treatment with enzalutamide or placebo in men with progressive metastatic castration-resistant prostate cancer (mCRPC) after chemotherapy in the AFFIRM trial. Patients and methods: Patients were randomly assigned to enzalutamide 160 mg/day or placebo. FACT-P was completed before randomization, at weeks 13, 17, 21, and 25, and every 12 weeks thereafter while on study treatment. Longitudinal changes in FACT-P scores from baseline to 25 weeks were analyzed using a mixed effects model for repeated measures (MMRM), with a pattern mixture model (PMM) applied as secondary analysis to address non-ignorable missing data. Cumulative distribution function (CDF) plots were generated and different methodological approaches and models for handling missing data were applied. Due to the exploratory nature of the analyses, adjustments for multiple comparisons were not made. AFFIRM is registered with ClinicalTrials.gov, number NCT00974311. Results: The intention-to-treat FACT-P population included 938 patients (enzalutamide, n = 674; placebo, n = 264) with evaluable FACT-P assessments at baseline and ≥1 post-baseline assessment. After 25 weeks, the mean FACT-P total score decreased by 1.52 points with enzalutamide compared with 13.73 points with placebo (P < 0.001). In addition, significant treatment differences at week 25 favoring enzalutamide were evident for all FACT-P subscales and indices, whether analyzed by MMRM or PMM. CDF plots revealed differences favoring enzalutamide compared with placebo across the full range of possible response levels for FACT-P total and all disease- and symptom-specific subscales/indices. Conclusion: In men with progressive mCRPC after docetaxel-based chemotherapy, enzalutamide is superior to placebo in health-related quality-of-life outcomes, regardless of analysis model or threshold selected for meaningful response. Clinical
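
    A minimal sketch of a mixed model for repeated measures of this kind is given below, using a random-intercept model in statsmodels as an approximation (a full MMRM would use an unstructured residual covariance across visits, which is not shown). The data frame, column names, and effect sizes are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical long-format data: one row per patient per post-baseline visit.
n_pat, weeks = 200, [13, 17, 21, 25]
arm = rng.integers(0, 2, size=n_pat)                      # 0 = placebo, 1 = active
rows = []
for i in range(n_pat):
    u = rng.normal(scale=8.0)                             # patient-level random effect
    for w in weeks:
        drift = -0.4 * w if arm[i] == 0 else -0.05 * w    # placebo declines faster
        rows.append({"patient_id": i, "week": w, "arm": arm[i],
                     "factp_change": drift + u + rng.normal(scale=6.0)})
long = pd.DataFrame(rows)

# Random-intercept mixed model for the repeated FACT-P change scores.
fit = smf.mixedlm("factp_change ~ C(week) * C(arm)", data=long,
                  groups=long["patient_id"]).fit(reml=True)
print(fit.summary())
```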

  12. Multielement chemical and statistical analyses from a uranium hydrogeochemical and stream-sediment survey in and near the Elkhorn Mountains, Jefferson County, Montana; Part I, Surface water

    USGS Publications Warehouse

    Suits, V.J.; Wenrich, K.J.

    1982-01-01

    Fifty-two surface-water samples, collected from an area south of Helena, Jefferson County, were analyzed for 51 chemical species. Of these variables, 35 showed detectable variation over the area, and 29 were utilized in a correlation analysis. Two populations are distinguished in the collected samples and are especially evident in the plot of Ca versus U. Samples separated on the basis of U versus Ca proved to represent drainage areas of two differing lithologies. One group was from waters that drain the Boulder batholith, the other from those that drain the Elkhorn Mountains volcanic rocks. These two groups of samples, in general, proved to have parallel but different linear trends between U and other elements. Therefore, the two groups of samples were treated separately in the statistical analyses. Over the area that drains the Boulder batholith, U concentrations in water ranged from 0.37 to 13.0 μg/l, with a mean of 1.9 μg/l. The samples from streams draining volcanic areas ranged from 0.04 to 1.5 μg/l, with a mean of 0.42 μg/l. The highest U values (12 and 13 μg/l) occur along Badger Creek, Rawhide Creek, Little Buffalo Gulch, and an unnamed tributary to Clancy Creek. Conductivity, hardness, Ba, Ca, Cl, K, Mg, Na and Sr are significantly correlated with U at or better than the 95 percent confidence limit in both populations. For water draining the Boulder batholith, uranium correlates significantly with alkalinity, pH, bicarbonate, Li, Mo, NO2+NO3, PO4, SiO2, SO4, F, and inorganic carbon. These correlations are similar to those found in a previous study of water samples in north-central New Mexico (Wenrich-Verbeek, 1977b). Uranium in water from the volcanic terrane does not show correlations with any of the above constituents, but does correlate well with V. This relationship with V is absent within the Boulder batholith samples.
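
    The per-group correlation screening described above can be illustrated generically as follows; the uranium and calcium values below are synthetic stand-ins rather than the survey data, and each lithologic group would be screened separately.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical stand-in values for one drainage group (e.g. Boulder batholith samples).
u  = rng.lognormal(mean=0.6, sigma=0.8, size=30)      # uranium, μg/l
ca = 5.0 + 2.0 * u + rng.normal(scale=2.0, size=30)   # calcium, mg/l

r, p = stats.pearsonr(u, ca)
print(f"r = {r:.2f}, significant at the 95% level: {p < 0.05}")
```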

  13. Statistical Analyses Comparing Prismatic Magnetite Crystals in ALH84001 Carbonate Globules with those from the Terrestrial Magnetotactic Bacteria Strain MV-1

    NASA Technical Reports Server (NTRS)

    Thomas-Keprta, Kathie L.; Clemett, Simon J.; Bazylinski, Dennis A.; Kirschvink, Joseph L.; McKay, David S.; Wentworth, Susan J.; Vali, H.; Gibson, Everett K.

    2000-01-01

    Here we use rigorous mathematical modeling to compare ALH84001 prismatic magnetites with those produced by terrestrial magnetotactic bacteria, MV-1. We find that this subset of the Martian magnetites appears to be statistically indistinguishable from those of MV-1.
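
    The paper's own modeling is not detailed in this abstract; purely as a generic illustration, a two-sample Kolmogorov-Smirnov test is one simple way to ask whether two crystal-size distributions are statistically indistinguishable. The measurement arrays below are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical crystal-length measurements (nm) for the two populations.
len_alh = rng.lognormal(mean=3.7, sigma=0.25, size=150)   # ALH84001 prismatic magnetites
len_mv1 = rng.lognormal(mean=3.7, sigma=0.25, size=150)   # MV-1 magnetites

stat, p = stats.ks_2samp(len_alh, len_mv1)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")   # large p => samples cannot be distinguished
```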

  14. Monitoring the quality consistency of Weibizhi tablets by micellar electrokinetic chromatography fingerprints combined with multivariate statistical analyses, the simple quantified ratio fingerprint method, and the fingerprint-efficacy relationship.

    PubMed

    Liu, Yingchun; Sun, Guoxiang; Wang, Yan; Yang, Lanping; Yang, Fangliang

    2015-06-01

    Micellar electrokinetic chromatography fingerprinting combined with quantification was successfully developed and applied to monitor the quality consistency of Weibizhi tablets, which is a classical compound preparation used to treat gastric ulcers. A background electrolyte composed of 57 mmol/L sodium borate, 21 mmol/L sodium dodecylsulfate and 100 mmol/L sodium hydroxide was used to separate compounds. To optimize capillary electrophoresis conditions, multivariate statistical analyses were applied. First, the most important factors influencing sample electrophoretic behavior were identified as background electrolyte concentrations. Then, a Box-Behnken design response-surface strategy using resolution index RF as an integrated response was set up to correlate factors with response. RF reflects the effective signal amount, resolution, and signal homogenization in an electropherogram; thus, it was regarded as an excellent indicator. In fingerprint assessments, a simple quantified ratio fingerprint method was established for comprehensive quality discrimination of traditional Chinese medicines/herbal medicines from qualitative and quantitative perspectives, by which the quality of 27 samples from the same manufacturer was well differentiated. In addition, the fingerprint-efficacy relationship between fingerprints and antioxidant activities was established using partial least squares regression, which provided important medicinal efficacy information for quality control. The present study offered an efficient means for monitoring Weibizhi tablet quality consistency.
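
    As a hedged illustration of the response-surface step, the sketch below fits a full quadratic model to a three-factor design with statsmodels; the factor names (borate, sds, naoh), the design points, and the RF values are placeholders, not the study's actual Box-Behnken matrix.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
# Hypothetical coded design points (-1/0/+1) for the three electrolyte factors,
# plus a stand-in integrated response RF.
design = pd.DataFrame(rng.choice([-1.0, 0.0, 1.0], size=(15, 3)),
                      columns=["borate", "sds", "naoh"])
design["RF"] = (5 - design["borate"]**2 - 0.5 * design["sds"]**2
                + 0.3 * design["borate"] * design["naoh"]
                + rng.normal(scale=0.1, size=len(design)))

# Full quadratic response-surface model: main effects, squares, and two-way interactions.
rsm = smf.ols("RF ~ (borate + sds + naoh)**2 + I(borate**2) + I(sds**2) + I(naoh**2)",
              data=design).fit()
print(rsm.params)
```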

  15. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  16. A FORTRAN 77 Program and User's Guide for the Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    SciTech Connect

    Helton, Jon C.; Shortencarier, Maichael J.

    1999-08-01

    A description and user's guide are given for a computer program, PATTRN, developed at Sandia National Laboratories for use in sensitivity analyses of complex models. This program is intended for use in the analysis of input-output relationships in Monte Carlo analyses when the input has been selected using random or Latin hypercube sampling. Procedures incorporated into the program are based upon attempts to detect increasingly complex patterns in scatterplots and involve the detection of linear relationships, monotonic relationships, trends in measures of central tendency, trends in measures of variability, and deviations from randomness. The program was designed to be easy to use and portable.
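
    PATTRN itself is a FORTRAN 77 code; the Python sketch below only illustrates the first two kinds of pattern detection it performs (linear and monotonic relationships) on a synthetic stand-in for a Latin hypercube sample of model inputs and outputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in for a sampled set of 5 model inputs and the resulting scalar output.
X = rng.uniform(size=(1000, 5))
y = np.exp(2 * X[:, 0]) + 4 * (X[:, 2] - 0.5) ** 3 + 0.1 * rng.normal(size=1000)

for j in range(X.shape[1]):
    r_lin, _ = stats.pearsonr(X[:, j], y)    # detects linear relationships
    r_mono, _ = stats.spearmanr(X[:, j], y)  # detects monotonic relationships
    print(f"input {j}: pearson {r_lin:+.2f}, spearman {r_mono:+.2f}")
```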

  17. Comparing paired vs non-paired statistical methods of analyses when making inferences about absolute risk reductions in propensity-score matched samples.

    PubMed

    Austin, Peter C

    2011-05-20

    Propensity-score matching allows one to reduce the effects of treatment-selection bias or confounding when estimating the effects of treatments when using observational data. Some authors have suggested that methods of inference appropriate for independent samples can be used for assessing the statistical significance of treatment effects when using propensity-score matching. Indeed, many authors in the applied medical literature use methods for independent samples when making inferences about treatment effects using propensity-score matched samples. Dichotomous outcomes are common in healthcare research. In this study, we used Monte Carlo simulations to examine the effect on inferences about risk differences (or absolute risk reductions) when statistical methods for independent samples are used compared with when statistical methods for paired samples are used in propensity-score matched samples. We found that compared with using methods for independent samples, the use of methods for paired samples resulted in: (i) empirical type I error rates that were closer to the advertised rate; (ii) empirical coverage rates of 95 per cent confidence intervals that were closer to the advertised rate; (iii) narrower 95 per cent confidence intervals; and (iv) estimated standard errors that more closely reflected the sampling variability of the estimated risk difference. Differences between the empirical and advertised performance of methods for independent samples were greater when the treatment-selection process was stronger compared with when the treatment-selection process was weaker. We recommend using statistical methods for paired samples when using propensity-score matched samples for making inferences on the effect of treatment on the reduction in the probability of an event occurring.
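
    The contrast between the two approaches can be sketched for a single matched, dichotomous-outcome data set as follows; the pair counts are hypothetical. The paired analysis (McNemar's test) uses only the discordant pairs, whereas the naive analysis treats the two matched groups as independent samples.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts of propensity-score matched pairs, cross-classified by outcome
# (event / no event) in the treated subject and the matched control subject.
#                 control: event   control: no event
pairs = np.array([[120,             180],    # treated: event
                  [ 60,             640]])   # treated: no event

# Paired analysis: McNemar's test depends only on the discordant pairs (180 vs 60).
res_paired = mcnemar(pairs, exact=False)
print(res_paired.statistic, res_paired.pvalue)

# Naive independent-samples analysis of the same data ignores the matching.
n = pairs.sum()
events = np.array([pairs[0].sum(), pairs[:, 0].sum()])   # treated events, control events
print(proportions_ztest(events, np.array([n, n])))
```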

  18. Additive-dominance genetic model analyses for late-maturity alpha-amylase activity in a bread wheat factorial crossing population.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Ibrahim, Amir M H

    2015-12-01

    Elevated levels of late-maturity α-amylase activity (LMAA) can result in low falling number scores, reduced grain quality, and downgrading of wheat (Triticum aestivum L.) class. A mating population was developed by crossing parents with different levels of LMAA. The F2 and F3 hybrids and their parents were evaluated for LMAA, and data were analyzed using the R software package 'qgtools' integrated with an additive-dominance genetic model and a mixed linear model approach. Simulated results showed high testing powers for additive and additive × environment variances, and comparatively low powers for dominance and dominance × environment variances. All variance components and their proportions to the phenotypic variance for the parents and hybrids were significant except for the dominance × environment variance. The estimated narrow-sense heritability and broad-sense heritability for LMAA were 14 and 54%, respectively. Highly significant negative additive effects for parents suggest that spring wheat cultivars 'Lancer' and 'Chester' can serve as good general combiners, and that 'Kinsman' and 'Seri-82' had negative specific combining ability in some hybrids despite their own significant positive additive effects, suggesting they can be used as parents to reduce LMAA levels. Seri-82 showed a very good general combining ability effect when used as a male parent, indicating the importance of reciprocal effects. Highly significant negative dominance effects and high-parent heterosis for hybrids demonstrated that the specific hybrid combinations Chester × Kinsman, 'Lerma52' × Lancer, Lerma52 × 'LoSprout' and 'Janz' × Seri-82 could be generated to produce cultivars with a significantly reduced LMAA level.
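
    The reported heritabilities follow from the usual variance-component ratios; the proportions below are back-calculated, illustrative values consistent with the 14% and 54% figures (assuming no additional components), not the paper's actual estimates.

```python
# Illustrative variance-component proportions (phenotypic variance scaled to 1).
V_A, V_D, V_E = 0.14, 0.40, 0.46          # additive, dominance, residual
V_P = V_A + V_D + V_E

h2_narrow = V_A / V_P                     # narrow-sense heritability -> 0.14 (14%)
H2_broad  = (V_A + V_D) / V_P             # broad-sense heritability  -> 0.54 (54%)
print(h2_narrow, H2_broad)
```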

  19. Suite versus composite statistics

    USGS Publications Warehouse

    Balsillie, J.H.; Tanner, W.F.

    1999-01-01

    Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. Very seldom, however, are they equivalent, and they normally yield statistically significant but different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.
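
    A small numerical sketch of the suite-versus-composite contrast is given below with synthetic samples. "Suite" statistics are taken here as statistics of the per-sample statistics (one common reading), while "composite" statistics pool all measurements; with equal sample sizes the two means coincide, but the pooled standard deviation also absorbs between-sample spread.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical suite: each sample is itself a distribution of measurements (e.g. grain sizes).
samples = [rng.normal(loc=m, scale=0.3, size=200) for m in (1.9, 2.0, 2.2, 2.5)]
pooled = np.concatenate(samples)

suite_mean     = np.mean([s.mean() for s in samples])   # mean of the per-sample means
composite_mean = pooled.mean()                          # mean of the pooled measurements

suite_sd     = np.mean([s.std(ddof=1) for s in samples])  # average within-sample SD
composite_sd = pooled.std(ddof=1)                         # pooled SD, larger in general

print(suite_mean, composite_mean, suite_sd, composite_sd)
```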

  20. Statistical Analyses of Satellite Cloud Object Data from CERES. Part II; Tropical Convective Cloud Objects During 1998 El Nino and Validation of the Fixed Anvil Temperature Hypothesis

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce A.; Parker, Lindsay; Lin, Bing; Eitzen, Zachary A.; Branson, Mark

    2006-01-01

    Characteristics of tropical deep convective cloud objects observed over the tropical Pacific during January-August 1998 are examined using the Tropical Rainfall Measuring Mission/Clouds and the Earth's Radiant Energy System single scanner footprint (SSF) data. These characteristics include the frequencies of occurrence and statistical distributions of cloud physical properties. Their variations with cloud-object size, sea surface temperature (SST), and satellite precessing cycle are analyzed in detail. A cloud object is defined as a contiguous patch of the Earth composed of satellite footprints within a single dominant cloud-system type. It is found that statistical distributions of cloud physical properties are significantly different among three size categories of cloud objects with equivalent diameters of 100 - 150 km (small), 150 - 300 km (medium), and > 300 km (large), respectively, except for the distributions of ice particle size. The distributions for the larger-size category of cloud objects are more skewed towards high SSTs, high cloud tops, low cloud-top temperature, large ice water path, high cloud optical depth, low outgoing longwave (LW) radiation, and high albedo than the smaller-size category. As SST varied from one satellite precessing cycle to another, the changes in macrophysical properties of cloud objects over the entire tropical Pacific were small for the large-size category of cloud objects, relative to those of the small- and medium-size categories. This result suggests that the fixed anvil temperature hypothesis of Hartmann and Larson may be valid for the large-size category. Combined with the result that a higher percentage of the large-size category of cloud objects occurs during higher SST subperiods, this implies that macrophysical properties of cloud objects would be less sensitive to further warming of the climate. On the other hand, when cloud objects are classified according to SSTs where large-scale dynamics plays important roles

  1. Estimability and simple dynamical analyses of range (range-rate range-difference) observations to artificial satellites. [laser range observations to LAGEOS using non-Bayesian statistics

    NASA Technical Reports Server (NTRS)

    Vangelder, B. H. W.

    1978-01-01

    Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determinations of the earth's orientation (precession and nutation) and earth's rotation as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two features should be considered as best (or solely) determinable by SLR in contrast to VLBI and LLR.

  2. Analysis of the Human Adult Urinary Metabolome Variations with Age, Body Mass Index, and Gender by Implementing a Comprehensive Workflow for Univariate and OPLS Statistical Analyses.

    PubMed

    Thévenot, Etienne A; Roux, Aurélie; Xu, Ying; Ezan, Eric; Junot, Christophe

    2015-08-01

    Urine metabolomics is widely used for biomarker research in the fields of medicine and toxicology. As a consequence, characterization of the variations of the urine metabolome under basal conditions becomes critical in order to avoid confounding effects in cohort studies. Such physiological information is however very scarce in the literature and in metabolomics databases so far. Here we studied the influence of age, body mass index (BMI), and gender on metabolite concentrations in a large cohort of 183 adults by using liquid chromatography coupled with high-resolution mass spectrometry (LC-HRMS). We implemented a comprehensive statistical workflow for univariate hypothesis testing and modeling by orthogonal partial least-squares (OPLS), which we made available to the metabolomics community within the online Workflow4Metabolomics.org resource. We found 108 urine metabolites displaying concentration variations with either age, BMI, or gender, by integrating the results from univariate p-values and multivariate variable importance in projection (VIP). Several metabolite clusters were further evidenced by correlation analysis, and they allowed stratification of the cohort. In conclusion, our study highlights the impact of gender and age on the urinary metabolome, and thus it indicates that these factors should be taken into account for the design of metabolomics studies.
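
    The univariate arm of such a workflow can be sketched generically as below (synthetic intensities, hypothetical grouping variable); the actual Workflow4Metabolomics implementation combines these tests with OPLS-based variable importance, which is not reproduced here.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
# Stand-in intensity matrix: 183 subjects x 150 metabolites, plus a binary grouping factor.
X = rng.lognormal(size=(183, 150))
gender = rng.integers(0, 2, size=183)

pvals = np.array([
    stats.mannwhitneyu(X[gender == 0, j], X[gender == 1, j], alternative="two-sided").pvalue
    for j in range(X.shape[1])
])
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} metabolites pass the FDR threshold")
```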

  3. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    PubMed

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied on brain, head & neck and prostate cancer patients, who received primary and boost phases in order to demonstrate its capability in calculating BED distributions, as well as measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated.
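
    The core per-phase quantity is the linear-quadratic biologically effective dose, BED = n d (1 + d / (α/β)). The toolkit's "true" and "approximate" multi-phase formulas are not given in the abstract, so the sketch below only shows the basic calculation, with illustrative fraction numbers and doses.

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (Gy) under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Multi-phase treatment: per-voxel BEDs of the phases add.  Numbers are illustrative only.
primary = bed(n_fractions=25, dose_per_fraction=2.0, alpha_beta=10.0)   # 60.0 Gy BED
boost   = bed(n_fractions=5,  dose_per_fraction=3.0, alpha_beta=10.0)   # 19.5 Gy BED
print(primary + boost)
```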

  4. The Enigma of the Dichotomic Pressure Response of GluN1-4a/b Splice Variants of NMDA Receptor: Experimental and Statistical Analyses

    PubMed Central

    Bliznyuk, Alice; Gradwohl, Gideon; Hollmann, Michael; Grossman, Yoram

    2016-01-01

    Professional deep-water divers, exposed to hyperbaric pressure (HP) above 1.1 MPa, develop High Pressure Neurological Syndrome (HPNS), which is associated with central nervous system (CNS) hyperexcitability. It was previously reported that HP augments N-methyl-D-aspartate receptor (NMDAR) synaptic response, increases neuronal excitability and potentially causes irreversible neuronal damage. Our laboratory has reported differential current responses under HP conditions in NMDAR subtypes that contain either GluN1-1a or GluN1-1b splice variants co-expressed in Xenopus laevis oocytes with all four GluN2 subunits. Recently, we reported that the increase in ionic currents measured under HP conditions is also dependent on which of the eight splice variants of GluN1 is co-expressed with the GluN2 subunit. We now report that the NMDAR subtype that contains GluN1-4a/b splice variants exhibited “dichotomic” (either increased or decreased) responses at HP. The distribution of the results is not normal; thus, an analysis of variance (ANOVA) test and a clustering analysis were employed for statistical verification of the grouping. Furthermore, the calculated constants of alpha function distribution analysis for the two groups were similar, suggesting that the mechanism underlying the switch between an increase or a decrease of the current at HP is a single process, the nature of which is still unknown. This dichotomic response of the GluN1-4a/b splice variant may be used as a model for studying reduced response in NMDAR at HP. Successful reversal of other NMDAR subtypes' responses (i.e., current reduction) may allow the elimination of the reversible short-term malfunctioning effects (HPNS), or even of the deleterious long-term effects induced by increased NMDAR function during HP exposure. PMID:27375428
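
    The abstract does not specify which clustering method was used; purely as an illustration of separating a dichotomic (two-group) response, the sketch below applies k-means with two clusters to synthetic current ratios.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
# Hypothetical current ratios (HP response / control response) across measurements:
# values above 1 represent "increased" responses, values below 1 "decreased" responses.
ratios = np.concatenate([rng.normal(1.4, 0.15, 22), rng.normal(0.7, 0.10, 18)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratios.reshape(-1, 1))
for k in (0, 1):
    print(f"cluster {k}: n = {(labels == k).sum()}, mean ratio = {ratios[labels == k].mean():.2f}")
```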

  5. Analysing spatio-temporal patterns of the global NO2-distribution retrieved from GOME satellite observations using a generalized additive model

    NASA Astrophysics Data System (ADS)

    Hayn, M.; Beirle, S.; Hamprecht, F. A.; Platt, U.; Menze, B. H.; Wagner, T.

    2009-04-01

    With the increasing availability of observations from different space-borne sensors, the joint analysis of observational data from multiple sources becomes more and more attractive. For such an analysis - oftentimes with little prior knowledge about local and global interactions between the different observational variables available - an explorative data-driven analysis of the remote sensing data may be of particular relevance. In the present work we used generalized additive models (GAM) in this task, in an exemplary study of spatio-temporal patterns in the tropospheric NO2-distribution derived from GOME satellite observations (1996 to 2001) at global scale. We modelled different temporal trends in the time series of the observed NO2, but focused on identifying correlations between NO2 and local wind fields. Here, our nonparametric modelling approach had several advantages over standard parametric models: while a model-based analysis allows only the testing of predefined hypotheses (assuming, for example, sinusoidal seasonal trends), the GAM allowed functional relations between different observational variables to be learned directly from the data. This was of particular interest in the present task, as little was known about relations between the observed NO2 distribution and transport processes by local wind fields, and the formulation of general functional relationships to be tested remained difficult. We found the observed temporal trends - weekly, seasonal and linear changes - to be in overall good agreement with previous studies and alternative ways of data analysis. However, NO2 observations were found to be affected by wind-dominated processes over several areas worldwide. Here we were able to estimate the extent of areas affected by specific NO2 emission sources, and to highlight likely atmospheric transport pathways. Overall, using a nonparametric model provided favourable means for a rapid inspection of this large spatio-temporal data set, with less bias than

  6. Cosmic statistics of statistics

    NASA Astrophysics Data System (ADS)

    Szapudi, István; Colombi, Stéphane; Bernardeau, Francis

    1999-12-01

    The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. The principal results concerning the cumulants ξ, Q3 and Q4 is that

  7. Statistical analyses of volleyball team performance.

    PubMed

    Eom, H J; Schutz, R W

    1992-03-01

    The purpose of this study was to investigate the playing characteristics of team performance in international men's volleyball. The specific purposes were (a) to examine differences in playing characteristics (in particular, the set and spike) between the Attack Process and the Counterattack Process; (b) to examine changes in playing characteristics as a function of team success (as indicated by single-game outcomes and by final tournament standings); and (c) to determine the best predictor, or a set of predictors, of team success among the selected skill components. Seventy-two sample games from the Third Federation of International Volleyball Cup men's competition were recorded using a computerized recording system. Results showed that the significant differences between Team Standing and Game Outcome were due to better performances on those skills used in the Counterattack Process. Among the eight selected skills, the block and spike were the most important in determining team success. The methodology used in this study and the subsequent results provide valuable aids for the coach in the evaluation of team performance and ultimately in the preparation of training sessions in volleyball. PMID:1574656

  8. Three-dimensional geological modelling and multivariate statistical analysis of water chemistry data to analyse and visualise aquifer structure and groundwater composition in the Wairau Plain, Marlborough District, New Zealand

    NASA Astrophysics Data System (ADS)

    Raiber, Matthias; White, Paul A.; Daughney, Christopher J.; Tschritter, Constanze; Davidson, Peter; Bainbridge, Sophie E.

    2012-05-01

    Summary: Concerns regarding groundwater contamination with nitrate and the long-term sustainability of groundwater resources have prompted the development of a multi-layered three-dimensional (3D) geological model to characterise the aquifer geometry of the Wairau Plain, Marlborough District, New Zealand. The 3D geological model which consists of eight litho-stratigraphic units has been subsequently used to synthesise hydrogeological and hydrogeochemical data for different aquifers in an approach that aims to demonstrate how integration of water chemistry data within the physical framework of a 3D geological model can help to better understand and conceptualise groundwater systems in complex geological settings. Multivariate statistical techniques (e.g. Principal Component Analysis and Hierarchical Cluster Analysis) were applied to groundwater chemistry data to identify hydrochemical facies which are characteristic of distinct evolutionary pathways and a common hydrologic history of groundwaters. Principal Component Analysis on hydrochemical data demonstrated that natural water-rock interactions, redox potential and human agricultural impact are the key controls of groundwater quality in the Wairau Plain. Hierarchical Cluster Analysis revealed distinct hydrochemical water quality groups in the Wairau Plain groundwater system. Visualisation of the results of the multivariate statistical analyses and distribution of groundwater nitrate concentrations in the context of aquifer lithology highlighted the link between groundwater chemistry and the lithology of host aquifers. The methodology followed in this study can be applied in a variety of hydrogeological settings to synthesise geological, hydrogeological and hydrochemical data and present them in a format readily understood by a wide range of stakeholders. This enables a more efficient communication of the results of scientific studies to the wider community.
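
    A generic sketch of the PCA-plus-hierarchical-clustering step is shown below; the hydrochemistry matrix is synthetic, and the preprocessing choices (log transform, standardisation, Ward linkage, four clusters) are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)
# Stand-in hydrochemistry table: 120 wells x 10 analytes (synthetic concentrations).
X = rng.lognormal(size=(120, 10))
Z = StandardScaler().fit_transform(np.log10(X))     # log-transform and standardise

pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)                           # principal-component scores per well
print(pca.explained_variance_ratio_)

groups = fcluster(linkage(Z, method="ward"), t=4, criterion="maxclust")   # hydrochemical facies
print(np.bincount(groups))
```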

  9. In situ sulfur isotopes (δ(34)S and δ(33)S) analyses in sulfides and elemental sulfur using high sensitivity cones combined with the addition of nitrogen by laser ablation MC-ICP-MS.

    PubMed

    Fu, Jiali; Hu, Zhaochu; Zhang, Wen; Yang, Lu; Liu, Yongsheng; Li, Ming; Zong, Keqing; Gao, Shan; Hu, Shenghong

    2016-03-10

    The sulfur isotope is an important geochemical tracer in diverse fields of geosciences. In this study, the effects of three different cone combinations with the addition of N2 on the performance of in situ S isotope analyses were investigated in detail. The signal intensities of S isotopes were improved by a factor of 2.3 and 3.6 using the X skimmer cone combined with the standard sample cone or the Jet sample cone, respectively, compared with the standard arrangement (H skimmer cone combined with the standard sample cone). This signal enhancement is important for the improvement of the precision and accuracy of in situ S isotope analysis at high spatial resolution. Different cone combinations have a significant effect on the mass bias and mass bias stability for S isotopes. Poor precisions of S isotope ratios were obtained using the Jet and X cones combination at their corresponding optimum makeup gas flow when using Ar plasma only. The addition of 4-8 ml min(-1) nitrogen to the central gas flow in laser ablation MC-ICP-MS was found to significantly enlarge the mass bias stability zone at their corresponding optimum makeup gas flow in these three different cone combinations. The polyatomic interferences of OO, SH, OOH were also significantly reduced, and the interference free plateaus of sulfur isotopes became broader and flatter in the nitrogen mode (N2 = 4 ml min(-1)). However, the signal intensity of S was not increased by the addition of nitrogen in this study. The laser fluence and ablation mode had significant effects on sulfur isotope fractionation during the analysis of sulfides and elemental sulfur by laser ablation MC-ICP-MS. The matrix effect among different sulfides and elemental sulfur was observed, but could be significantly reduced by line scan ablation in preference to single spot ablation under the optimized fluence. It is recommended that the d90 values of the particles in pressed powder pellets for accurate and precise S isotope analysis

  12. Addition of docetaxel or bisphosphonates to standard of care in men with localised or metastatic, hormone-sensitive prostate cancer: a systematic review and meta-analyses of aggregate data

    PubMed Central

    Vale, Claire L; Burdett, Sarah; Rydzewska, Larysa H M; Albiges, Laurence; Clarke, Noel W; Fisher, David; Fizazi, Karim; Gravis, Gwenaelle; James, Nicholas D; Mason, Malcolm D; Parmar, Mahesh K B; Sweeney, Christopher J; Sydes, Matthew R; Tombal, Bertrand; Tierney, Jayne F

    2016-01-01

    docetaxel for men with locally advanced disease (M0). Survival results from three (GETUG-12, RTOG 0521, STAMPEDE) of these trials (2121 [53%] of 3978 men) showed no evidence of a benefit from the addition of docetaxel (HR 0·87 [95% CI 0·69–1·09]; p=0·218), whereas failure-free survival data from four (GETUG-12, RTOG 0521, STAMPEDE, TAX 3501) of these trials (2348 [59%] of 3978 men) showed that docetaxel improved failure-free survival (0·70 [0·61–0·81]; p<0·0001), which translates into a reduced absolute 4-year failure rate of 8% (5–10). We identified seven eligible randomised controlled trials of bisphosphonates for men with M1 disease. Survival results from three of these trials (2740 [88%] of 3109 men) showed that addition of bisphosphonates improved survival (0·88 [0·79–0·98]; p=0·025), which translates to 5% (1–8) absolute improvement, but this result was influenced by the positive result of one trial of sodium clodronate, and we found no evidence of a benefit from the addition of zoledronic acid (0·94 [0·83–1·07]; p=0·323), which translates to an absolute improvement in survival of 2% (−3 to 7). Of 17 trials of bisphosphonates for men with M0 disease, survival results from four trials (4079 [66%] of 6220 men) showed no evidence of benefit from the addition of bisphosphonates (1·03 [0·89–1·18]; p=0·724) or zoledronic acid (0·98 [0·82–1·16]; p=0·782). Failure-free survival definitions were too inconsistent for formal meta-analyses for the bisphosphonate trials. Interpretation: The addition of docetaxel to standard of care should be considered standard care for men with M1 hormone-sensitive prostate cancer who are starting treatment for the first time. More evidence on the effects of docetaxel on survival is needed in the M0 disease setting. No evidence exists to suggest that zoledronic acid improves survival in men with M1 or M0 disease, and any potential benefit is probably small. Funding: Medical Research Council UK. PMID
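
    Pooled hazard ratios of this kind are commonly obtained by inverse-variance weighting of the trial-level log hazard ratios; the sketch below shows that generic calculation with illustrative inputs, and is not a reproduction of the review's actual method or trial data.

```python
import numpy as np

def pool_hazard_ratios(hrs, ci_lo, ci_hi):
    """Fixed-effect inverse-variance pooling of hazard ratios given their 95% CIs."""
    log_hr = np.log(hrs)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)   # SE recovered from the CI width
    w = 1.0 / se**2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])

# Illustrative per-trial hazard ratios and 95% CIs (not the review's actual trial data).
print(pool_hazard_ratios([0.90, 0.82, 0.95], [0.74, 0.66, 0.80], [1.09, 1.02, 1.13]))
```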

  13. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E. Beyer's "Educational Studies and…

  14. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical function of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates and so on, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.

  15. Titanic: A Statistical Exploration.

    ERIC Educational Resources Information Center

    Takis, Sandra L.

    1999-01-01

    Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
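
    A typical classroom exercise of this kind is a chi-square test of independence on a survival-by-class contingency table; the counts below are placeholders for illustration only, not the actual Titanic figures.

```python
from scipy.stats import chi2_contingency

# Illustrative survived/died counts by passenger class (placeholder numbers).
table = [[200, 120, 180],    # survived: 1st, 2nd, 3rd class
         [120, 160, 530]]    # died
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```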

  16. Morbidity statistics

    PubMed Central

    Smith, Alwyn

    1969-01-01

    This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722

  17. Illustrating the practice of statistics

    SciTech Connect

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.
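
    The "simulation coupled with optimization" idea for planning a data collection scheme can be illustrated generically by estimating the power of a design via Monte Carlo and scanning over candidate sample sizes; the sketch below is a generic illustration, not the article's rain-gauge design study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

def power_by_simulation(n, effect, sd, n_sims=2000, alpha=0.05):
    """Monte Carlo estimate of the power of a two-sample t-test."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd, n)
        b = rng.normal(effect, sd, n)
        hits += stats.ttest_ind(a, b).pvalue < alpha
    return hits / n_sims

# Planning step: scan candidate sample sizes and keep the smallest meeting a target power.
for n in (10, 20, 40, 80):
    print(n, power_by_simulation(n, effect=0.5, sd=1.0))
```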

  18. Statistics: A Brief Overview

    PubMed Central

    Winters, Ryan; Winters, Andrew; Amedee, Ronald G.

    2010-01-01

    The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter. PMID:21603381

  19. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.

  20. Environmental restoration and statistics: Issues and needs

    SciTech Connect

    Gilbert, R.O.

    1991-10-01

    Statisticians have a vital role to play in environmental restoration (ER) activities. One facet of that role is to point out where additional work is needed to develop statistical sampling plans and data analyses that meet the needs of ER. This paper is an attempt to show where statistics fits into the ER process. The statistician, as a member of the ER planning team, works collaboratively with the team to develop the site characterization sampling design, so that data of the quality and quantity required by the specified data quality objectives (DQOs) are obtained. At the same time, the statistician works with the rest of the planning team to design and implement, when appropriate, the observational approach to streamline the ER process and reduce costs. The statistician will also provide the expertise needed to select or develop appropriate tools for statistical analysis that are suited for problems that are common to waste-site data. These data problems include highly heterogeneous waste forms, large variability in concentrations over space, correlated data, data that do not have a normal (Gaussian) distribution, and measurements below detection limits. Other problems include environmental transport and risk models that yield highly uncertain predictions, and the need to effectively communicate to the public highly technical information, such as sampling plans, site characterization data, statistical analysis results, and risk estimates. Even though some statistical analysis methods are available "off the shelf" for use in ER, these problems require the development of additional statistical tools, as discussed in this paper. 29 refs.

  1. Lehrer in der Bundesrepublik Deutschland. Eine Kritische Analyse Statistischer Daten über das Lehrpersonal an Allgemeinbildenden Schulen. (Education in the Federal Republic of Germany. A Statistical Study of Teachers in Schools of General Education.)

    ERIC Educational Resources Information Center

    Kohler, Helmut

    The purpose of this study was to analyze the available statistics concerning teachers in schools of general education in the Federal Republic of Germany. An analysis of the demographic structure of the pool of full-time teachers showed that in 1971 30 percent of the teachers were under age 30, and 50 percent were under age 35. It was expected that…

  2. Statistics for Learning Genetics

    ERIC Educational Resources Information Center

    Charles, Abigail Sheena

    2012-01-01

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…

  3. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  4. SEER Statistics

    Cancer.gov

    The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.

  5. Cancer Statistics

    MedlinePlus

    ... cancer statistics across the world. U.S. Cancer Mortality Trends The best indicator of progress against cancer is ... the number of cancer survivors has increased. These trends show that progress is being made against the ...

  6. Statistical Physics

    NASA Astrophysics Data System (ADS)

    Hermann, Claudine

    Statistical Physics bridges the properties of a macroscopic system and the microscopic behavior of its constituting particles, otherwise impossible due to the giant magnitude of Avogadro's number. Numerous systems of today's key technologies - such as semiconductors or lasers - are macroscopic quantum objects; only statistical physics allows for understanding their fundamentals. Therefore, this graduate text also focuses on particular applications such as the properties of electrons in solids with applications, and radiation thermodynamics and the greenhouse effect.

  8. Implementation of quality by design principles in the development of microsponges as drug delivery carriers: Identification and optimization of critical factors using multivariate statistical analyses and design of experiments studies.

    PubMed

    Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija

    2015-07-15

    Microsponges drug delivery system (MDDC) was prepared by double emulsion-solvent-diffusion technique using rotor-stator homogenization. Quality by design (QbD) concept was implemented for the development of MDDC with potential to be incorporated into semisolid dosage form (gel). Quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified, accordingly. Critical material attributes (CMA) and Critical process parameters (CPP) were identified using quality risk management (QRM) tool, failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis along with literature data, product and process knowledge and understanding. FMECA identified amount of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and water ratio in primary/multiple emulsions as CMA and rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between identified CPP and particle size as CQA was described in the design space using design of experiments - one-factor response surface method. Obtained results from statistically designed experiments enabled establishment of mathematical models and equations that were used for detailed characterization of influence of identified CPP upon MDDC particle size and particle size distribution and their subsequent optimization. PMID:25895722
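
    A minimal sketch of the PCA/PLS screening step is given below with synthetic batch data; the attribute matrix, response, and component counts are hypothetical placeholders rather than the study's actual variables.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
# Stand-in batch data: 30 batches x 8 material/process attributes, and a response
# (e.g. particle size); names and values are placeholders, not the study's data.
X = rng.normal(size=(30, 8))
y = 2.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(scale=0.5, size=30)

scores = PCA(n_components=2).fit_transform(X)       # PCA-X overview of batch variability
pls = PLSRegression(n_components=2).fit(X, y)
print(pls.coef_.ravel())    # attributes with large coefficients are candidate CMA/CPP
```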

  9. SNS shielding analyses overview

    SciTech Connect

    Popova, Irina; Gallmeier, Franz; Iverson, Erik B; Lu, Wei; Remec, Igor

    2015-01-01

    This paper gives an overview of ongoing shielding analyses for the Spallation Neutron Source. Presently, most of the shielding work is concentrated on the beam lines and instrument enclosures to prepare for commissioning, safe operation, and an adequate radiation background in the future. There is also ongoing work for the accelerator facility. This includes radiation-protection analyses for radiation monitor placement, designing shielding for additional facilities to test accelerator structures, redesigning some parts of the facility, and designing test facilities for component testing of the main accelerator structure. Neutronics analyses are also required to support spent-structure management, including waste characterisation analyses, choice of a proper transport/storage package, and shielding enhancement for the package if required.

  10. Lidar Analyses

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1995-01-01

    A brief description of enhancements made to the NASA MSFC coherent lidar model is provided. Notable improvements are the addition of routines to automatically determine the 3 dB misalignment loss angle and the backscatter value at which the probability of a good estimate (for a maximum likelihood estimator) falls to 50%. The ability to automatically generate energy/aperture parametrization (EAP) plots which include the effects of angular misalignment has been added. These EAP plots make it very easy to see that, for any practical system with some degree of misalignment, there is an optimum telescope diameter for which the laser pulse energy required to achieve a particular sensitivity is minimized; increasing the telescope diameter above this results in a reduction of sensitivity. The parametrizations also clearly show that the alignment tolerances at shorter wavelengths are much stricter than those at longer wavelengths. A brief outline of the NASA MSFC AEOLUS program is given and a summary of the lidar designs considered during the program is presented. Some of the design trades are discussed both in the text and in a conference publication attached as an appendix.

  11. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  12. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  13. Statistical Modelling of Compound Floods

    NASA Astrophysics Data System (ADS)

    Bevacqua, Emanuele; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin; Manning, Colin

    2016-04-01

    of interest. This is based on real data for river discharge (Y_RIVER) and sea level (Y_SEA) from the River Têt in the south of France. The impact of the compound flood is the water level in the area between the river and sea stations, which we define here as h = α·Y_RIVER + (1 − α)·Y_SEA. Here we show the sensitivity of the system to changes in the two physical parameters. Through variations in α we can study the system in one or two dimensions, which allows for the assessment of the risk associated with either of the two variables alone or with a combination of them. Varying instead the second parameter, i.e. the dependence between the variables Y_RIVER and Y_SEA, we show how an apparently weak dependence can increase the risk of flooding significantly with respect to the independent case. The model can be applied to future climate by inserting predictors into the statistical model as additional conditioning variables. By conditioning the simulation of the statistical model on predictors obtained from climate model projections, both the change in risk and the characteristics of compound floods in the future can be analysed.
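    A minimal numerical sketch of the point made above about dependence: it simulates the impact variable h = α·Y_RIVER + (1 − α)·Y_SEA from standardized river and sea levels and compares the probability of exceeding a fixed threshold for independent and weakly correlated drivers. The bivariate normal stands in for the copula used in the actual model, and the threshold, α and correlation values are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)

      def exceedance_prob(alpha, rho, threshold=2.0, n=200_000):
          # P(h > threshold) for h = alpha*Y_river + (1 - alpha)*Y_sea, where the two
          # standardized drivers follow a bivariate normal with correlation rho.
          cov = [[1.0, rho], [rho, 1.0]]
          y_river, y_sea = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
          h = alpha * y_river + (1.0 - alpha) * y_sea
          return (h > threshold).mean()

      # Even a weak dependence between river discharge and sea level raises the
      # probability of exceeding the threshold relative to the independent case.
      for rho in (0.0, 0.2, 0.5):
          print(f"rho = {rho:.1f}  P(h > 2) = {exceedance_prob(0.5, rho):.4f}")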

  14. Candidate Assembly Statistical Evaluation

    1998-07-15

    The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.

  15. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  16. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  17. Fingerprinting and source identification of an oil spill in China Bohai Sea by gas chromatography-flame ionization detection and gas chromatography-mass spectrometry coupled with multi-statistical analyses.

    PubMed

    Sun, Peiyan; Bao, Mutai; Li, Guangmei; Wang, Xinping; Zhao, Yuhui; Zhou, Qing; Cao, Lixin

    2009-01-30

    This paper describes a case study in which advanced chemical fingerprinting and data interpretation techniques were used to characterize the chemical composition and determine the source of an unknown spilled oil reported on the beach of the China Bohai Sea in 2005. The spilled oil was suspected to have been released from nearby platforms. In response to this specific site investigation need, a tiered analytical approach using gas chromatography-mass spectrometry (GC-MS) and gas chromatography-flame ionization detection (GC-FID) was applied. A variety of diagnostic ratios of "source-specific marker" compounds, in particular isomers of biomarkers, were determined and compared. Several statistical data correlation methods were applied, including cluster analysis and Student's t-test, and the two methods were compared. The comprehensive analysis reveals the following: (1) the oil fingerprints of the three spilled oil samples (S1, S2 and S3) positively match each other; (2) the three spilled oil samples have undergone different degrees of weathering, dominated by evaporation with depletion of the low-molecular-mass n-alkanes; (3) the fingerprinting profiles of the three spilled oil samples are a positive match with those of the suspected source oil samples C41, C42, C43, C44 and C45; (4) there are significant differences between the fingerprinting profiles of the three spilled oil samples and those of the suspected source oil samples A1, B1, B2, B3, B4, C1, C2, C3, C5 and C6.
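    To make the clustering step concrete, the sketch below groups oil samples by a few biomarker diagnostic ratios using average-linkage hierarchical clustering (SciPy). The sample names echo those in the abstract, but the ratio values and the choice of three ratios are invented purely for illustration; they are not data from the study.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      # Rows = oil samples, columns = hypothetical diagnostic ratios of biomarkers.
      samples = ["S1", "S2", "S3", "C41", "A1", "B1"]
      ratios = np.array([
          [0.81, 1.05, 1.62],
          [0.80, 1.07, 1.60],
          [0.82, 1.04, 1.63],
          [0.80, 1.06, 1.61],
          [0.55, 0.70, 2.10],
          [0.60, 0.75, 2.00],
      ])

      # Standardize each ratio so that no single ratio dominates the distances.
      z = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)
      tree = linkage(pdist(z), method="average")
      groups = fcluster(tree, t=2, criterion="maxclust")
      for name, g in zip(samples, groups):
          print(name, "cluster", g)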

  18. [Statistical materials].

    PubMed

    1986-01-01

    Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831

  19. Epidemiology of Type 1 Diabetes Mellitus in Korea through an Investigation of the National Registration Project of Type 1 Diabetes for the Reimbursement of Glucometer Strips with Additional Analyses Using Claims Data

    PubMed Central

    Song, Sun Ok; Nam, Joo Young; Park, Kyeong Hye; Yoon, Ji-Hae; Son, Kyung-Mi; Ko, Young; Lim, Dong-Ha

    2016-01-01

    Background: The aim of this study was to estimate the prevalence and incidence of type 1 diabetes mellitus (T1DM) in Korea. In addition, we planned to do a performance analysis of the Registration Project of Type 1 diabetes for the reimbursement of consumable materials. Methods: To obtain nationwide data on the incidence and prevalence of T1DM, we extracted claims data from July 2011 to August 2013 from the Registration Project of Type 1 diabetes on the reimbursement of consumable materials in the National Health Insurance (NHI) Database. For a more detailed analysis of the T1DM population in Korea, stratification by gender, age, and area was performed, and prevalence and incidence were calculated. Results: Of the 8,256 subjects enrolled over the 26 months, the male to female ratio was 1 to 1.12, the median age was 37.1 years, and an average of 136 new T1DM patients were registered to the T1DM registry each month, resulting in 1,632 newly diagnosed T1DM patients each year. We found that the incidence rate of new T1DM cases was 3.28 per 100,000 people. The average proportion of T1DM patients compared with each region's population was 0.0125%. The total number of insurance subscribers under the universal compulsory NHI in Korea was 49,662,097, and the total number of diabetes patients, excluding duplication, was 3,762,332. Conclusion: The prevalence of T1DM over the course of the study was approximately 0.017% to 0.021% of the entire population of Korea, and the annual incidence of T1DM was 3.28:100,000 overall and 3.25:100,000 for Koreans under 20 years old. PMID:26912154

  20. Information Omitted From Analyses.

    PubMed

    2015-08-01

    In the Original Article titled “Higher-Order Genetic and Environmental Structure of Prevalent Forms of Child and Adolescent Psychopathology” published in the February 2011 issue of JAMA Psychiatry (then Archives of General Psychiatry) (2011;68[2]:181-189), there were 2 errors. Although the article stated that the dimensions of psychopathology were measured using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder, all dimensional scores used in the reported analyses were actually based on parent reports of symptoms; youth reports were not used. In addition, whereas the article stated that each symptom dimension was residualized on age, sex, age-squared, and age by sex, the dimensions actually were only residualized on age, sex, and age-squared. All analyses were repeated using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder; these dimensional scores were residualized on age, age-squared, sex, sex by age, and sex by age-squared. The results of the new analyses were qualitatively the same as those reported in the article, with no substantial changes in conclusions. The only notable small difference was that major depression and generalized anxiety disorder dimensions had small but significant loadings on the internalizing factor in addition to their substantial loadings on the general factor in the analyses of both genetic and non-shared covariances in the selected models in the new analyses. Corrections were made to the

  1. Statistical analyses of plume composition and deposited radionuclide mixture ratios

    SciTech Connect

    Kraus, Terrence D.; Sallaberry, Cedric Jean-Marie; Eckert-Gallup, Aubrey Celia; Brito, Roxanne; Hunt, Brian D.; Osborn, Douglas M.

    2014-01-01

    A proposed method is considered to classify the regions in the close neighborhood of selected measurements according to the ratio of two radionuclides measured from either a radioactive plume or a deposited radionuclide mixture. The locations associated with each measurement are then represented across the area of interest by a representative ratio class. This method allows for a more comprehensive and meaningful understanding of the data sampled following a radiological incident.

  2. Statistical considerations for grain-size analyses of tills

    USGS Publications Warehouse

    Jacobs, A.M.

    1971-01-01

    Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
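    The hyperbola criterion described above is equivalent to asking whether a two-sample t statistic, built from the standard-population summary and the new subsample measurements, stays below its critical value. The sketch below implements that check; the particle-size values and the standard-population summary are hypothetical.

      import numpy as np
      from scipy import stats

      def consistent_with_standard(new_values, std_mean, std_sd, std_n, alpha=0.05):
          # Pooled two-sample t comparison of the new subsample mean against the
          # standard population; True means the point falls between the branches
          # of the corresponding hyperbola (measurements accepted as reliable).
          k = len(new_values)
          new_mean = np.mean(new_values)
          new_sd = np.std(new_values, ddof=1)
          sp2 = ((std_n - 1) * std_sd**2 + (k - 1) * new_sd**2) / (std_n + k - 2)
          t_stat = (new_mean - std_mean) / np.sqrt(sp2 * (1.0 / std_n + 1.0 / k))
          t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=std_n + k - 2)
          return abs(t_stat) <= t_crit

      # Five hypothetical sand-percentage subsamples tested against a standard
      # population summarized by its mean, standard deviation and sample size.
      print(consistent_with_standard([41.2, 39.8, 40.5, 42.1, 40.9],
                                     std_mean=40.0, std_sd=2.5, std_n=30))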

  3. Statistical analyses of monozygotic and dizygotic twinning rates.

    PubMed

    Fellman, Johan

    2013-12-01

    The French mathematician Bertillon reasoned that the number of dizygotic (DZ) pairs would equal twice the number of twin pairs of unlike sexes. The remaining twin pairs in a sample would presumably be monozygotic (MZ). Weinberg restated this idea and the calculation has come to be known as Weinberg's differential rule (WDR). The keystone of WDR is that DZ twin pairs should be equally likely to be of the same or the opposite sex. Although the probability of a male birth is greater than .5, the reliability of WDR's assumptions has never been conclusively verified or rejected. Let the probability of an opposite-sex (OS) twin maternity be p_O, that of a same-sex (SS) twin maternity p_S and, consequently, the probability of other maternities 1 - p_S - p_O. The parameter estimates p̂_O and p̂_S are relative frequencies. Applying WDR, the MZ rate is m = p_S - p_O and the DZ rate is d = 2p_O, but the estimates m̂ and d̂ are not relative frequencies. The maximum likelihood estimators p̂_S and p̂_O are unbiased, efficient, and asymptotically normal. The linear transformations m̂ = p̂_S - p̂_O and d̂ = 2p̂_O are efficient and asymptotically normal. If WDR holds they are also unbiased. For tests of a set of m and d rates, contingency tables cannot be used. Alternative tests are presented and the models are applied to published data. PMID:24063661
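    A small sketch of the WDR computations above: the MZ and DZ rates are obtained as linear transformations of the same-sex and opposite-sex relative frequencies, with delta-method standard errors based on the multinomial covariance of the estimates. The counts are invented for illustration.

      import numpy as np

      def weinberg_rates(n_same_sex, n_opposite_sex, n_maternities):
          # WDR: MZ rate m = p_S - p_O, DZ rate d = 2 * p_O, where p_S and p_O are
          # relative frequencies of same-sex and opposite-sex twin maternities.
          p_s = n_same_sex / n_maternities
          p_o = n_opposite_sex / n_maternities
          m = p_s - p_o
          d = 2.0 * p_o
          var_ps = p_s * (1.0 - p_s) / n_maternities
          var_po = p_o * (1.0 - p_o) / n_maternities
          cov = -p_s * p_o / n_maternities      # multinomial covariance of the estimates
          se_m = np.sqrt(var_ps + var_po - 2.0 * cov)
          se_d = 2.0 * np.sqrt(var_po)
          return m, se_m, d, se_d

      # Illustrative counts: 760 same-sex and 520 opposite-sex twin maternities
      # among 100,000 maternities in total.
      m, se_m, d, se_d = weinberg_rates(760, 520, 100_000)
      print(f"MZ rate {m:.5f} (SE {se_m:.5f}), DZ rate {d:.5f} (SE {se_d:.5f})")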

  4. STATISTICAL ANALYSES ON THERMAL ASPECTS OF SOLAR FLARES

    SciTech Connect

    Li, Y. P.; Gan, W. Q.; Feng, L.

    2012-03-10

    The frequency distribution of flare energies provides a crucial diagnostic to calculate the overall energy residing in flares and to estimate the role of flares in coronal heating. It often takes a power law as its functional form. We have analyzed various variables, including the thermal energies E_th of 1843 flares at their peak time. They were recorded by both the Geostationary Operational Environmental Satellites and the Reuven Ramaty High Energy Solar Spectroscopic Imager during the time period from 2002 to 2009 and are classified as flares greater than C1.0. The relationship between different flare parameters is investigated. It is found that fitting the frequency distribution of E_th to a power law results in an index of -2.38. We also investigate the corrected thermal energy E_cth, which represents the total thermal energy of a flare, including the energy lost during the rising phase. Its corresponding power-law slope is -2.35. Compilation of the frequency distributions of the thermal energies from nanoflares, microflares, and flares in the present work and from other authors shows that power-law indices below -2.0 cover the energy range from 10^24 to 10^32 erg. Whether this frequency distribution can provide sufficient energy for coronal heating in active regions and the quiet Sun is discussed.
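    A compact illustration of fitting a power-law index to flare energies: the standard maximum-likelihood estimator for a continuous power law p(E) ∝ E^(-α) above a lower cutoff is α̂ = 1 + n / Σ ln(E_i / E_min). The synthetic energies below are drawn from a known slope purely to check the estimator; they are not the GOES/RHESSI data.

      import numpy as np

      def power_law_index_mle(energies, e_min):
          # MLE for a continuous power law p(E) ~ E**(-alpha), E >= e_min:
          # alpha_hat = 1 + n / sum(ln(E_i / e_min)); SE ~ (alpha_hat - 1) / sqrt(n).
          e = np.asarray(energies, dtype=float)
          e = e[e >= e_min]
          n = e.size
          alpha_hat = 1.0 + n / np.sum(np.log(e / e_min))
          return alpha_hat, (alpha_hat - 1.0) / np.sqrt(n)

      # Synthetic thermal energies (erg) drawn from a power law with index 2.38.
      rng = np.random.default_rng(1)
      e_min, true_alpha = 1e28, 2.38
      sample = e_min * (1.0 - rng.random(2000)) ** (-1.0 / (true_alpha - 1.0))
      print(power_law_index_mle(sample, e_min))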

  5. Statistical concepts in metrology with a postscript on statistical graphics

    NASA Astrophysics Data System (ADS)

    Ku, Harry H.

    1988-08-01

    Statistical Concepts in Metrology was originally written as Chapter 2 for the Handbook of Industrial Metrology published by the American Society of Tool and Manufacturing Engineers, 1967. It was reprinted as one of 40 papers in NBS Special Publication 300, Volume 1, Precision Measurement and Calibration; Statistical Concepts and Procedures, 1969. Since then this chapter has been used as basic text in statistics in Bureau-sponsored courses and seminars, including those for Electricity, Electronics, and Analytical Chemistry. While concepts and techniques introduced in the original chapter remain valid and appropriate, some additions on recent development of graphical methods for the treatment of data would be useful. Graphical methods can be used effectively to explore information in data sets prior to the application of classical statistical procedures. For this reason additional sections on statistical graphics are added as a postscript.

  6. Assessments of feline plasma biochemistry reference intervals for three in-house analysers and a commercial laboratory analyser.

    PubMed

    Baral, Randolph M; Dhand, Navneet K; Krockenberger, Mark B; Govendir, Merran

    2015-08-01

    For each species, the manufacturers of in-house analysers (and commercial laboratories) provide standard reference intervals (RIs) that do not account for any differences such as geographical population differences and do not overtly state the potential for variation between results obtained from serum or plasma. Additionally, biases have been demonstrated for in-house analysers which result in different RIs for each different type of analyser. The objective of this study was to calculate RIs (with 90% confidence intervals [CIs]) for 13 biochemistry analytes when tested on three commonly used in-house veterinary analysers, as well as a commercial laboratory analyser. The calculated RIs were then compared with those provided by the in-house analyser manufacturers and the commercial laboratory. Plasma samples were collected from 53 clinically normal cats. After centrifugation, plasma was divided into four aliquots; one aliquot was sent to the commercial laboratory and the remaining three were tested using the in-house biochemistry analysers. The distribution of results was used to choose the appropriate statistical technique for each analyte from each analyser to calculate RIs. Provided reference limits were deemed appropriate if they fell within the 90% CIs of the calculated reference limits. Transference validation was performed on provided and calculated RIs. Twenty-nine of a possible 102 provided reference limits (28%) were within the calculated 90% CIs. To ensure proper interpretation of laboratory results, practitioners should determine RIs for their practice populations and/or use reference change values when assessing their patients' clinical chemistry results.
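    One common recipe for the reference-interval calculation described above is a nonparametric 95% interval (2.5th and 97.5th percentiles) with bootstrap 90% confidence intervals around each reference limit; the sketch below implements that recipe on invented creatinine values. The study itself chose the statistical technique per analyte according to the distribution of results, so this is only one of the possible approaches.

      import numpy as np

      def reference_interval(values, n_boot=5000, seed=0):
          # Nonparametric 95% reference interval with bootstrap 90% CIs for each limit.
          rng = np.random.default_rng(seed)
          x = np.asarray(values, dtype=float)
          lower, upper = np.percentile(x, [2.5, 97.5])
          boot = np.array([np.percentile(rng.choice(x, size=x.size, replace=True), [2.5, 97.5])
                           for _ in range(n_boot)])
          return (lower, np.percentile(boot[:, 0], [5, 95])), (upper, np.percentile(boot[:, 1], [5, 95]))

      # Hypothetical plasma creatinine results (umol/L) from 53 clinically normal cats.
      creatinine = np.random.default_rng(2).normal(140.0, 25.0, size=53)
      print(reference_interval(creatinine))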

  7. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception when learning statistics, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study is a questionnaire that was adopted and adapted from the reliable Survey of Attitudes towards Statistics (SATS©) instrument. The study was conducted with engineering undergraduate students at a university on the East Coast of Malaysia. The respondents consisted of students from different faculties who were taking the applied statistics course. The results are analysed in terms of descriptive analysis and contribute to a descriptive understanding of students' attitudes towards the teaching and learning process of statistics.

  8. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  9. Cosmetic Plastic Surgery Statistics

    MedlinePlus

    2014 Cosmetic Plastic Surgery Statistics: Cosmetic Procedure Trends. 2014 Plastic Surgery Statistics Report. Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...

  10. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.

  11. Statistical Prediction in Proprietary Rehabilitation.

    ERIC Educational Resources Information Center

    Johnson, Kurt L.; And Others

    1987-01-01

    Applied statistical methods to predict case expenditures for low back pain rehabilitation cases in proprietary rehabilitation. Extracted predictor variables from case records of 175 workers compensation claimants with some degree of permanent disability due to back injury. Performed several multiple regression analyses resulting in a formula that…

  12. Misuse of statistics in surgical literature

    PubMed Central

    Ronna, Brenden; Robbins, Riann B.

    2016-01-01

    Statistical analyses are a key part of biomedical research. Traditionally surgical research has relied upon a few statistical methods for evaluation and interpretation of data to improve clinical practice. As research methods have increased in both rigor and complexity, statistical analyses and interpretation have fallen behind. Some evidence suggests that surgical research studies are being designed and analyzed improperly given the specific study question. The goal of this article is to discuss the complexities of surgical research analyses and interpretation, and provide some resources to aid in these processes. PMID:27621909

  13. Misuse of statistics in surgical literature.

    PubMed

    Thiese, Matthew S; Ronna, Brenden; Robbins, Riann B

    2016-08-01

    Statistical analyses are a key part of biomedical research. Traditionally surgical research has relied upon a few statistical methods for evaluation and interpretation of data to improve clinical practice. As research methods have increased in both rigor and complexity, statistical analyses and interpretation have fallen behind. Some evidence suggests that surgical research studies are being designed and analyzed improperly given the specific study question. The goal of this article is to discuss the complexities of surgical research analyses and interpretation, and provide some resources to aid in these processes.

  14. Misuse of statistics in surgical literature.

    PubMed

    Thiese, Matthew S; Ronna, Brenden; Robbins, Riann B

    2016-08-01

    Statistical analyses are a key part of biomedical research. Traditionally surgical research has relied upon a few statistical methods for evaluation and interpretation of data to improve clinical practice. As research methods have increased in both rigor and complexity, statistical analyses and interpretation have fallen behind. Some evidence suggests that surgical research studies are being designed and analyzed improperly given the specific study question. The goal of this article is to discuss the complexities of surgical research analyses and interpretation, and provide some resources to aid in these processes. PMID:27621909

  15. Misuse of statistics in surgical literature

    PubMed Central

    Ronna, Brenden; Robbins, Riann B.

    2016-01-01

    Statistical analyses are a key part of biomedical research. Traditionally surgical research has relied upon a few statistical methods for evaluation and interpretation of data to improve clinical practice. As research methods have increased in both rigor and complexity, statistical analyses and interpretation have fallen behind. Some evidence suggests that surgical research studies are being designed and analyzed improperly given the specific study question. The goal of this article is to discuss the complexities of surgical research analyses and interpretation, and provide some resources to aid in these processes.

  16. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
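    The fragment below is a minimal illustration of the idea behind one-dimensional SPM, not the SPM1D API itself: a pointwise two-sample t statistic is computed over the nodes of registered 1D curves, and the field-level threshold is taken from a permutation distribution of the maximum absolute t value. The data are synthetic.

      import numpy as np

      def spm_like_ttest2(ya, yb, n_perm=2000, alpha=0.05, seed=0):
          # Pointwise two-sample t over the nodes; the curve-level threshold comes
          # from a permutation distribution of max|t| (controls the family-wise rate).
          rng = np.random.default_rng(seed)

          def t_curve(a, b):
              na, nb = len(a), len(b)
              sp2 = ((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1)) / (na + nb - 2)
              return (a.mean(0) - b.mean(0)) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

          t_obs = t_curve(ya, yb)
          pooled = np.vstack([ya, yb])
          max_t = []
          for _ in range(n_perm):
              idx = rng.permutation(len(pooled))
              max_t.append(np.abs(t_curve(pooled[idx[:len(ya)]], pooled[idx[len(ya):]])).max())
          threshold = np.quantile(max_t, 1.0 - alpha)
          return t_obs, threshold, np.abs(t_obs) > threshold

      # Two groups of ten registered curves sampled at 101 nodes; group B differs
      # from group A only over nodes 40-59.
      rng = np.random.default_rng(3)
      ya = rng.normal(0.0, 1.0, (10, 101))
      yb = rng.normal(0.0, 1.0, (10, 101))
      yb[:, 40:60] += 1.0
      t_obs, threshold, significant = spm_like_ttest2(ya, yb)
      print(threshold, significant.any())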

  17. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  18. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  19. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1996-01-01

    This booklet of pocket statistics includes the 1996 NASA Major Launch Record, NASA Procurement, Financial, and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  20. Statistical ecology comes of age

    PubMed Central

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  1. Statistical ecology comes of age.

    PubMed

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  2. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  3. Predict! Teaching Statistics Using Informational Statistical Inference

    ERIC Educational Resources Information Center

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  4. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    PubMed

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will be supported by MS Excel 2016 soon, would eliminate some limitations of using statistical formulas within MS Excel.

  5. Measuring statistical evidence using relative belief

    PubMed Central

    Evans, Michael

    2016-01-01

    A fundamental concern of a theory of statistical inference is how one should measure statistical evidence. Certainly the words “statistical evidence,” or perhaps just “evidence,” are much used in statistical contexts. It is fair to say, however, that the precise characterization of this concept is somewhat elusive. Our goal here is to provide a definition of how to measure statistical evidence for any particular statistical problem. Since evidence is what causes beliefs to change, it is proposed to measure evidence by the amount beliefs change from a priori to a posteriori. As such, our definition involves prior beliefs and this raises issues of subjectivity versus objectivity in statistical analyses. This is dealt with through a principle requiring the falsifiability of any ingredients to a statistical analysis. These concerns lead to checking for prior-data conflict and measuring the a priori bias in a prior. PMID:26925207
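    As a toy instance of measuring evidence by the change from prior to posterior belief, the sketch below computes a relative belief ratio (posterior density over prior density) for a binomial success probability under a Beta prior. The prior parameters and data are made up; a ratio above 1 indicates evidence in favour of the hypothesised value, below 1 evidence against.

      from scipy import stats

      def relative_belief_ratio(theta, k, n, a=1.0, b=1.0):
          # RB(theta) = posterior density / prior density for a Beta(a, b) prior
          # on a binomial success probability, after observing k successes in n trials.
          prior = stats.beta.pdf(theta, a, b)
          posterior = stats.beta.pdf(theta, a + k, b + n - k)
          return posterior / prior

      # Evidence about theta = 0.5 after 62 successes in 100 trials (illustrative).
      print(relative_belief_ratio(0.5, k=62, n=100))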

  6. Statistics for People Who (Think They) Hate Statistics. Third Edition

    ERIC Educational Resources Information Center

    Salkind, Neil J.

    2007-01-01

    This text teaches an often intimidating and difficult subject in a way that is informative, personable, and clear. The author takes students through various statistical procedures, beginning with correlation and graphical representation of data and ending with inferential techniques and analysis of variance. In addition, the text covers SPSS, and…

  7. Statistical Reference Datasets

    National Institute of Standards and Technology Data Gateway

    Statistical Reference Datasets (Web, free access)   The Statistical Reference Datasets is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.

  8. Explorations in statistics: statistical facets of reproducibility.

    PubMed

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.

  9. Introductory Statistics and Fish Management.

    ERIC Educational Resources Information Center

    Jardine, Dick

    2002-01-01

    Describes how fisheries research and management data (available on a website) have been incorporated into an Introductory Statistics course. In addition to the motivation gained from seeing the practical relevance of the course, some students have participated in the data collection and analysis for the New Hampshire Fish and Game Department. (MM)

  10. The disagreeable behaviour of the kappa statistic.

    PubMed

    Flight, Laura; Julious, Steven A

    2015-01-01

    It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed.
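    A short sketch of the quantities mentioned above for a two-rater, two-category table: Cohen's kappa, the proportion of observed agreement, the maximum kappa attainable given the marginal totals, and the prevalence-and-bias-adjusted kappa (PABAK = 2·p_o − 1). The counts are invented to show how skewed marginals depress kappa despite high raw agreement.

      import numpy as np

      def kappa_summary(table):
          # Agreement measures for a square inter-rater table.
          t = np.asarray(table, dtype=float)
          n = t.sum()
          p_obs = np.trace(t) / n                      # proportion of concordance
          row, col = t.sum(axis=1) / n, t.sum(axis=0) / n
          p_exp = np.sum(row * col)                    # chance-expected agreement
          kappa = (p_obs - p_exp) / (1.0 - p_exp)
          p_obs_max = np.sum(np.minimum(row, col))     # best agreement the marginals allow
          kappa_max = (p_obs_max - p_exp) / (1.0 - p_exp)
          pabak = 2.0 * p_obs - 1.0
          return {"p_obs": p_obs, "kappa": kappa, "kappa_max": kappa_max, "pabak": pabak}

      # 85% raw agreement but strongly skewed marginals: kappa comes out low.
      print(kappa_summary([[80, 5], [10, 5]]))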

  11. Statistical quality control through overall vibration analysis

    NASA Astrophysics Data System (ADS)

    Carnero, M. a. Carmen; González-Palma, Rafael; Almorza, David; Mayorga, Pedro; López-Escobar, Carlos

    2010-05-01

    The present study introduces the concept of statistical quality control in automotive wheel bearing manufacturing processes. Defects on the products under analysis can have a direct influence on passengers' safety and comfort. At present, the use of vibration analysis on machine tools for quality control purposes is not very extensive in manufacturing facilities. Noise and vibration are common quality problems in bearings. These failure modes likely occur under certain operating conditions and do not require high vibration amplitudes but relate to certain vibration frequencies. The vibration frequencies are affected by the type of surface problems (chattering) of ball races that are generated through grinding processes. The purpose of this paper is to identify grinding process variables that affect the quality of bearings by using statistical principles in the field of machine tools. In addition, the quality of the finished parts under different combinations of process variables is assessed. This paper intends to establish the foundations to predict the quality of the products through the analysis of self-induced vibrations during the contact between the grinding wheel and the parts. To achieve this goal, the overall self-induced vibration readings under different combinations of process variables are analysed using statistical tools. The analysis of data and design of experiments follows a classical approach, considering all potential interactions between variables. The analysis of data is conducted through analysis of variance (ANOVA) for data sets that meet normality and homoscedasticity criteria. This paper utilizes different statistical tools to support the conclusions, such as the chi-squared, Shapiro-Wilk, symmetry, kurtosis, Cochran, Bartlett, Hartley, and Kruskal-Wallis tests. The analysis presented is the starting point to extend the use of predictive techniques (vibration analysis) for quality control. This paper demonstrates the existence

  12. Additional renal arteries: incidence and morphometry.

    PubMed

    Satyapal, K S; Haffejee, A A; Singh, B; Ramsaroop, L; Robbs, J V; Kalideen, J M

    2001-01-01

    Advances in surgical and uro-radiological techniques dictate a reappraisal and definition of renal arterial variations. This retrospective study aimed to establish the incidence of additional renal arteries. Two subsets were analysed, viz.: a) a clinical series of 130 renal angiograms performed on renal transplant donors and 32 cadaver kidneys used in renal transplantation; b) a cadaveric series of 74 en-bloc, morphologically normal kidney pairs. The sex and race distribution was: males 140, females 96; African 84, Indian 91, White 43 and "Coloured" 18, respectively. The incidences of first and second additional arteries were, respectively, 23.2% (R: 18.6%; L: 27.6%) and 4.5% (R: 4.7%; L: 4.4%). Additional arteries occurred more frequently on the left (L: 32.0%; R: 23.3%). The incidence bilaterally was 10.2% (first additional arteries only). The sex and race incidences (first and second additional) were: males, 28.0%, 5.1%; females, 16.4%, 3.8%; and African 31.1%, 5.4%; Indian 13.5%, 4.5%; White 30.9%, 4.4%; and "Coloured" 18.5%, 0%, respectively. Significant differences in the incidence of first additional arteries were noted between sexes and races. The morphometry of the additional renal arteries was as follows: lengths (cm) of the first and second additional arteries, 4.5 and 3.8 (right) and 4.9 and 3.7 (left); diameters (cm), 0.4 and 0.3 (right) and 0.3 and 0.3 (left). Detailed morphometry by sex and race was also recorded. No statistically significant differences were noted. Our overall incidence of additional renal arteries of 27.7% compared favourably with that reported in the literature (weighted mean 28.1%). The study is unique in recording detailed morphometry of these vessels. Careful identification of this anatomical variation is important since it impacts on renal transplantation surgery, vascular operations for renal artery stenosis, reno-vascular hypertension, Takayasu's disease, renal trauma and uro-radiological procedures.

  13. Developments in Statistical Education.

    ERIC Educational Resources Information Center

    Kapadia, Ramesh

    1980-01-01

    The current status of statistics education at the secondary level is reviewed, with particular attention focused on the various instructional programs in England. A description and preliminary evaluation of the Schools Council Project on Statistical Education is included. (MP)

  14. Mathematical and statistical analysis

    NASA Technical Reports Server (NTRS)

    Houston, A. Glen

    1988-01-01

    The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.

  15. Finding Statistical Data.

    ERIC Educational Resources Information Center

    Bopp, Richard E.; Van Der Laan, Sharon J.

    1985-01-01

    Presents a search strategy for locating time-series or cross-sectional statistical data in published sources which was designed for undergraduate students who require 30 units of data for five separate variables in a statistical model. Instructional context and the broader applicability of the search strategy for general statistical research is…

  16. Avoiding Statistical Mistakes

    ERIC Educational Resources Information Center

    Strasser, Nora

    2007-01-01

    Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…

  17. Ethics in Statistics

    ERIC Educational Resources Information Center

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  18. Statistical quality management

    NASA Astrophysics Data System (ADS)

    Vanderlaan, Paul

    1992-10-01

    Some aspects of statistical quality management are discussed. Quality has to be defined as a concrete, measurable quantity. The concepts of Total Quality Management (TQM), Statistical Process Control (SPC), and inspection are explained. In most cases SPC is better than inspection. It can be concluded that statistics has great possibilities in the field of TQM.

  19. The Economic Cost of Homosexuality: Multilevel Analyses

    ERIC Educational Resources Information Center

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  20. Septic tank additive impacts on microbial populations.

    PubMed

    Pradhan, S; Hoover, M T; Clark, G H; Gumpertz, M; Wollum, A G; Cobb, C; Strock, J

    2008-01-01

    Environmental health specialists, other onsite wastewater professionals, scientists, and homeowners have questioned the effectiveness of septic tank additives. This paper describes an independent, third-party, field scale, research study of the effects of three liquid bacterial septic tank additives and a control (no additive) on septic tank microbial populations. Microbial populations were measured quarterly in a field study for 12 months in 48 full-size, functioning septic tanks. Bacterial populations in the 48 septic tanks were statistically analyzed with a mixed linear model. Additive effects were assessed for three septic tank maintenance levels (low, intermediate, and high). Dunnett's t-test for tank bacteria (alpha = .05) indicated that none of the treatments were significantly different, overall, from the control at the statistical level tested. In addition, the additives had no significant effects on septic tank bacterial populations at any of the septic tank maintenance levels. Additional controlled, field-based research is warranted, however, to address additional additives and experimental conditions.

  1. A Generative Statistical Algorithm for Automatic Detection of Complex Postures

    PubMed Central

    Amit, Yali; Biron, David

    2015-01-01

    This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a “big data” workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species. PMID:26439258

  2. Investigation of the freely available easy-to-use software 'EZR' for medical statistics.

    PubMed

    Kanda, Y

    2013-03-01

    Although there are many commercially available statistical software packages, only a few implement a competing risk analysis or a proportional hazards regression model with time-dependent covariates, which are necessary in studies on hematopoietic SCT. In addition, most packages are not clinician friendly, as they require that commands be written based on statistical languages. This report describes the statistical software 'EZR' (Easy R), which is based on R and R commander. EZR enables the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, receiver operating characteristics analyses, meta-analyses, sample size calculation and so on, by point-and-click access. EZR is freely available on our website (http://www.jichi.ac.jp/saitama-sct/SaitamaHP.files/statmed.html) and runs on both Windows (Microsoft Corporation, USA) and Mac OS X (Apple, USA). This report provides instructions for the installation and operation of EZR. PMID:23208313

  3. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and... by conducting additional analyses using any standard engineering economics method such as sensitivity... energy or water system alternative....

  4. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and... by conducting additional analyses using any standard engineering economics method such as sensitivity... energy or water system alternative....

  5. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  6. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  7. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  8. Statistically determined nickel cadmium performance relationships

    NASA Technical Reports Server (NTRS)

    Gross, Sidney

    1987-01-01

    A statistical analysis was performed on sealed nickel cadmium cell manufacturing data and cell matching data. The cells subjected to the analysis were 30 Ah sealed Ni/Cd cells, made by General Electric. A total of 213 data parameters was investigated, including such information as plate thickness, amount of electrolyte added, weight of active material, positive and negative capacity, and charge-discharge behavior. Statistical analyses were made to determine possible correlations between test events. The data show many departures from normal distribution. Product consistency from one lot to another is an important attribute for aerospace applications. It is clear from these examples that there are some significant differences between lots. Statistical analyses are seen to be an excellent way to spot those differences. Also, it is now proven beyond doubt that battery testing is one of the leading causes of statistics.

  9. Statistical criteria for characterizing irradiance time series.

    SciTech Connect

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.

  10. Statistical Modeling and Analysis of Laser-Evoked Potentials of Electrocorticogram Recordings from Awake Humans

    PubMed Central

    Chen, Zhe; Ohara, Shinji; Cao, Jianting; Vialatte, François; Lenz, Fred A.; Cichocki, Andrzej

    2007-01-01

    This article is devoted to statistical modeling and analysis of electrocorticogram (ECoG) signals induced by painful cutaneous laser stimuli, which were recorded from implanted electrodes in awake humans. Specifically, with statistical tools of factor analysis and independent component analysis, the pain-induced laser-evoked potentials (LEPs) were extracted and investigated under different controlled conditions. With the help of wavelet analysis, quantitative and qualitative analyses were conducted regarding the LEPs' attributes of power, amplitude, and latency, in both averaging and single-trial experiments. Statistical hypothesis tests were also applied in various experimental setups. Experimental results reported herein also confirm previous findings in the neurophysiology literature. In addition, single-trial analysis has also revealed many new observations that might be interesting to the neuroscientists or clinical neurophysiologists. These promising results show convincing validation that advanced signal processing and statistical analysis may open new avenues for future studies of such ECoG or other relevant biomedical recordings. PMID:18369410
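
    The authors' ECoG pipeline and data are not available in this record; as a hedged illustration of the blind source separation step mentioned above, the sketch below applies FastICA (from scikit-learn) to two synthetic channel mixtures.

        # Illustrative only: synthetic "electrode" mixtures, not ECoG recordings.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(9)
        t = np.linspace(0, 1, 2000)
        s1 = np.sin(2 * np.pi * 7 * t)                # stand-in for an evoked component
        s2 = rng.laplace(size=t.size)                 # stand-in for background activity
        sources = np.c_[s1, s2]
        mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
        observed = sources @ mixing.T                 # two mixed "channels"

        ica = FastICA(n_components=2, random_state=0)
        recovered = ica.fit_transform(observed)       # estimated independent components
        print(recovered.shape)                        # (2000, 2)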

  11. Statistical analysis of temperature extremes in long-time series from Uppsala

    NASA Astrophysics Data System (ADS)

    Rydén, Jesper

    2011-08-01

    Temperature records in Uppsala, Sweden, during the period 1840-2001, are analysed. More precisely, yearly maxima and minima are studied in order to investigate possible trends. Extreme-value distributions are fitted, and a nonstationary model is introduced by allowing for a time-dependent location parameter. Comparisons are made with an estimated trend for mean temperature. In addition, a Mann-Kendall test is performed in order to test for the presence of a trend. The results obtained from the statistical models agree with those found earlier by descriptive statistics, in particular an increasing trend for the coldest days of the year.
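
    The paper's data and its nonstationary model are not reproduced here; as a minimal sketch under synthetic data, the following fits a stationary GEV distribution to annual maxima and checks a crude linear trend in the maxima (a stand-in, not the Mann-Kendall test or the time-dependent location parameter used in the study).

        # Hedged sketch: synthetic annual maxima with a small imposed trend.
        import numpy as np
        from scipy.stats import genextreme, linregress

        rng = np.random.default_rng(2)
        years = np.arange(1840, 2002)
        maxima = genextreme.rvs(c=-0.1, loc=25, scale=2, size=years.size,
                                random_state=rng) + 0.005 * (years - years[0])

        c, loc, scale = genextreme.fit(maxima)
        trend = linregress(years, maxima)
        print(f"GEV fit: shape={c:.3f}, loc={loc:.2f}, scale={scale:.2f}")
        print(f"linear trend: {trend.slope * 100:.2f} degrees per century, p={trend.pvalue:.3f}")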

  12. Projections of Education Statistics to 2007.

    ERIC Educational Resources Information Center

    Gerald, Debra E.; Hussar, William J.

    "Projections of Education Statistics to 2007" is the 26th report in a series begun in 1964 that revises projections annually to show statistics on elementary and secondary schools and institutions of higher education at the national level. Included are projections for enrollment, graduates, classroom teachers, and expenditures. In addition, this…

  13. Florida Library Directory with Statistics, 1998.

    ERIC Educational Resources Information Center

    Florida Dept. of State, Tallahassee. Div. of Library and Information Services.

    This 49th annual edition of the Florida Library Directory with Statistics includes listings for over 1,000 libraries of all types in Florida, with contact names, phone numbers, addresses, and e-mail and web addresses. In addition, there is a section of library statistics, showing data on the use, resources, and financial condition of Florida's libraries.…

  14. Exploring Correlation Coefficients with Golf Statistics

    ERIC Educational Resources Information Center

    Quinn, Robert J

    2006-01-01

    This article explores the relationships between several pairs of statistics kept on professional golfers on the PGA tour. Specifically, two measures related to the player's ability to drive the ball are compared as are two measures related to the player's ability to putt. An additional analysis is made between one statistic related to putting and…

  15. Leadership statistics in random structures

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Krapivsky, P. L.

    2004-01-01

    The largest component ("the leader") in evolving random structures often exhibits universal statistical properties. This phenomenon is demonstrated analytically for two ubiquitous structures: random trees and random graphs. In both cases, lead changes are rare: the average number of lead changes increases quadratically with the logarithm of the system size. As a function of time, the number of lead changes is self-similar. Additionally, the probability that no lead change ever occurs decays exponentially with the average number of lead changes.
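
    The analytical results above need no code, but the phenomenon is easy to probe numerically. The sketch below is an illustrative simulation (not the ensemble studied in the paper): it grows a random graph edge by edge and counts how often the identity of the largest component changes, using one simple convention for what counts as a change.

        import numpy as np

        def count_lead_changes(n_nodes, n_edges, seed=0):
            """Grow a random graph edge by edge; count changes of the largest component."""
            rng = np.random.default_rng(seed)
            parent = list(range(n_nodes))
            size = [1] * n_nodes

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]       # path halving
                    x = parent[x]
                return x

            leader, changes = 0, 0                      # start with node 0's component in the lead
            for _ in range(n_edges):
                a, b = rng.integers(0, n_nodes, size=2)
                ra, rb = find(a), find(b)
                if ra == rb:
                    continue
                if size[ra] < size[rb]:
                    ra, rb = rb, ra
                parent[rb] = ra                         # merge smaller component into larger
                size[ra] += size[rb]
                if leader in (ra, rb):
                    leader = ra                         # leading component absorbed another; same leader
                elif size[ra] > size[leader]:
                    leader = ra                         # a different component took the lead
                    changes += 1
            return changes

        for n in (1_000, 10_000, 100_000):
            print(n, count_lead_changes(n, n_edges=2 * n, seed=42))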

  16. Statistical Physics of Colloidal Dispersions.

    NASA Astrophysics Data System (ADS)

    Canessa, E.

    changes of the depletion attraction with free polymer concentration. Chapter IV deals with the contributions of pairwise additive and volume dependent forces to the free energy of charge stabilized colloidal dispersions. To a first approximation the extra volume dependent contributions due to the chemical equilibrium and counterion-macroion coupling are treated in a one-component plasma approach. Added salt is treated as an ionized gas within the Debye-Huckel theory of electrolytes. In order to set this approach on a quantitative basis the existence of an equilibrium lattice with a small shear modulus is examined. Structural phase transitions in these systems are also analysed theoretically as a function of added electrolyte.

  17. XMM-Newton publication statistics

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Parmar, A. N.; Valencic, L. A.; Smith, R.; Loiseau, N.; Salama, A.; Ehle, M.; Schartel, N.

    2014-02-01

    We assessed the scientific productivity of XMM-Newton by examining XMM-Newton publications and data usage statistics. We analyse 3272 refereed papers, published until the end of 2012, that directly use XMM-Newton data. The SAO/NASA Astrophysics Data System (ADS) was used to provide additional information on each paper including the number of citations. For each paper, the XMM-Newton observation identifiers and instruments used to provide the scientific results were determined. The identifiers were used to access the XMM-Newton Science Archive (XSA) to provide detailed information on the observations themselves and on the original proposals. The information obtained from these sources was then combined to allow the scientific productivity of the mission to be assessed. Since around three years after the launch of XMM-Newton there have been around 300 refereed papers per year that directly use XMM-Newton data. After more than 13 years in operation, this rate shows no evidence that it is decreasing. Since 2002, around 100 scientists per year become lead authors for the first time on a refereed paper which directly uses XMM-Newton data. Each refereed XMM-Newton paper receives around four citations per year in the first few years with a long-term citation rate of three citations per year, more than five years after publication. About half of the articles citing XMM-Newton articles are not primarily X-ray observational papers. The distribution of elapsed time between observations taken under the Guest Observer programme and first article peaks at 2 years with a possible second peak at 3.25 years. Observations taken under the Target of Opportunity programme are published significantly faster, after one year on average. The fraction of science time taken until the end of 2009 that has been used in at least one article is ~90%. Most observations were used more than once, yielding on average a factor of two in usage on available observing time per year. About 20 % of

  18. Characterizing Year 11 Students' Evaluation of a Statistical Process

    ERIC Educational Resources Information Center

    Pfannkuch, Maxine

    2005-01-01

    Evaluating the statistical process is considered a higher order skill and has received little emphasis in instruction. This study analyses thirty 15-year-old students' responses to two statistics assessment tasks, which required evaluation of a statistical investigation. The SOLO taxonomy is used as a framework to develop a hierarchy of responses.…

  19. Chemists, Access, Statistics

    NASA Astrophysics Data System (ADS)

    Holmes, Jon L.

    2000-06-01

    IP-number access. Current subscriptions can be upgraded to IP-number access at little additional cost. We are pleased to be able to offer to institutions and libraries this convenient mode of access to subscriber only resources at JCE Online. JCE Online Usage Statistics We are continually amazed by the activity at JCE Online. So far, the year 2000 has shown a marked increase. Given the phenomenal overall growth of the Internet, perhaps our surprise is not warranted. However, during the months of January and February 2000, over 38,000 visitors requested over 275,000 pages. This is a monthly increase of over 33% from the October-December 1999 levels. It is good to know that people are visiting, but we would very much like to know what you would most like to see at JCE Online. Please send your suggestions to JCEOnline@chem.wisc.edu. For those who are interested, JCE Online year-to-date statistics are available. Biographical Snapshots of Famous Chemists: Mission Statement Feature Editor: Barbara Burke Chemistry Department, California State Polytechnic University-Pomona, Pomona, CA 91768 phone: 909/869-3664 fax: 909/869-4616 email: baburke@csupomona.edu The primary goal of this JCE Internet column is to provide information about chemists who have made important contributions to chemistry. For each chemist, there is a short biographical "snapshot" that provides basic information about the person's chemical work, gender, ethnicity, and cultural background. Each snapshot includes links to related websites and to a biobibliographic database. The database provides references for the individual and can be searched through key words listed at the end of each snapshot. All students, not just science majors, need to understand science as it really is: an exciting, challenging, human, and creative way of learning about our natural world. Investigating the life experiences of chemists can provide a means for students to gain a more realistic view of chemistry. In addition students

  20. Online use statistics.

    PubMed

    Tannery, Nancy Hrinya; Silverman, Deborah L; Epstein, Barbara A

    2002-01-01

    Online use statistics can provide libraries with a tool to be used when developing an online collection of resources. Statistics can provide information on overall use of a collection, individual print and electronic journal use, and collection use by specific user populations. They can also be used to determine the number of user licenses to purchase. This paper focuses on the issue of use statistics made available for one collection of online resources.

  1. Statistical distribution sampling

    NASA Technical Reports Server (NTRS)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  2. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The methodology for developing the regression equations used to estimate the statistics was originally developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation
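
    The report's actual regression equations are not reproduced in this record; the sketch below shows the general form of such a step (ordinary least squares of a flow statistic on basin characteristics for gaged basins, then prediction at an ungaged site) using invented variable names and data.

        # Hypothetical illustration; not the study's equations or data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        drainage_area = rng.uniform(5, 300, size=17)      # square miles, synthetic
        mean_elev = rng.uniform(300, 700, size=17)        # feet, synthetic
        low_flow = 0.05 * drainage_area + 0.002 * mean_elev + rng.normal(0, 0.5, 17)

        X = sm.add_constant(np.column_stack([drainage_area, mean_elev]))
        fit = sm.OLS(low_flow, X).fit()
        print(fit.params, fit.rsquared)

        # apply the fitted equation at an "ungaged" site
        x_new = sm.add_constant(np.array([[120.0, 450.0]]), has_constant='add')
        print("predicted low flow:", fit.predict(x_new))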

  3. NOAA's National Snow Analyses

    NASA Astrophysics Data System (ADS)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based, snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational, products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
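
    The NOHRSC implementation is not described in enough detail here to reproduce; the following one-liner only illustrates the general Newtonian nudging idea, i.e. relaxing a modelled value toward an observation by a chosen weight.

        # Generic illustration of nudging, not the operational NOHRSC scheme.
        def nudge(model_swe, obs_swe, weight):
            """Relax a modelled snow water equivalent toward an observed value.

            weight in [0, 1]: 0 keeps the model value, 1 adopts the observation.
            """
            return model_swe + weight * (obs_swe - model_swe)

        print(nudge(model_swe=120.0, obs_swe=135.0, weight=0.3))   # 124.5 mm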

  4. Multidimensional Visual Statistical Learning

    ERIC Educational Resources Information Center

    Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.

    2008-01-01

    Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…

  5. Statistics and Measurements

    PubMed Central

    Croarkin, M. Carroll

    2001-01-01

    For more than 50 years, the Statistical Engineering Division (SED) has been instrumental in the success of a broad spectrum of metrology projects at NBS/NIST. This paper highlights fundamental contributions of NBS/NIST statisticians to statistics and to measurement science and technology. Published methods developed by SED staff, especially during the early years, endure as cornerstones of statistics not only in metrology and standards applications, but as data-analytic resources used across all disciplines. The history of statistics at NBS/NIST began with the formation of what is now the SED. Examples from the first five decades of the SED illustrate the critical role of the division in the successful resolution of a few of the highly visible, and sometimes controversial, statistical studies of national importance. A review of the history of major early publications of the division on statistical methods, design of experiments, and error analysis and uncertainty is followed by a survey of several thematic areas. The accompanying examples illustrate the importance of SED in the history of statistics, measurements and standards: calibration and measurement assurance, interlaboratory tests, development of measurement methods, Standard Reference Materials, statistical computing, and dissemination of measurement technology. A brief look forward sketches the expanding opportunity and demand for SED statisticians created by current trends in research and development at NIST. PMID:27500023

  6. Explorations in Statistics: Regression

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2011-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…

  7. On Statistical Testing.

    ERIC Educational Resources Information Center

    Huberty, Carl J.

    An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…

  8. Reform in Statistical Education

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    2007-01-01

    Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…

  9. Demonstrating Poisson Statistics.

    ERIC Educational Resources Information Center

    Vetterling, William T.

    1980-01-01

    Describes an apparatus that offers a very lucid demonstration of Poisson statistics as applied to electrical currents, and the manner in which such statistics account for shot noise when applied to macroscopic currents. The experiment described is intended for undergraduate physics students. (HM)

  10. Statistical Summaries: Public Institutions.

    ERIC Educational Resources Information Center

    Virginia State Council of Higher Education, Richmond.

    This document presents a statistical portrait of Virginia's 17 public higher education institutions. Data provided include: enrollment figures (broken down in categories such as sex, residency, full- and part-time status, residence, ethnicity, age, and level of postsecondary education); FTE figures; admissions statistics (such as number…

  11. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. PMID:26466186
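
    A worked example with invented counts makes the test-accuracy measures discussed in the review concrete.

        # Hypothetical 2x2 table: 90 true positives, 15 false positives,
        # 10 false negatives, 185 true negatives.
        tp, fp, fn, tn = 90, 15, 10, 185

        sensitivity = tp / (tp + fn)                    # 0.90
        specificity = tn / (tn + fp)                    # 0.925
        accuracy = (tp + tn) / (tp + fp + fn + tn)      # about 0.917
        lr_positive = sensitivity / (1 - specificity)   # 12.0
        lr_negative = (1 - sensitivity) / specificity   # about 0.108

        print(sensitivity, specificity, accuracy, lr_positive, lr_negative)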

  12. Explorations in Statistics: Power

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2010-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…

  13. Applied Statistics with SPSS

    ERIC Educational Resources Information Center

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  14. Application Statistics 1987.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto.

    Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…

  15. Introduction to Statistical Physics

    NASA Astrophysics Data System (ADS)

    Casquilho, João Paulo; Ivo Cortez Teixeira, Paulo

    2014-12-01

    Preface; 1. Random walks; 2. Review of thermodynamics; 3. The postulates of statistical physics. Thermodynamic equilibrium; 4. Statistical thermodynamics – developments and applications; 5. The classical ideal gas; 6. The quantum ideal gas; 7. Magnetism; 8. The Ising model; 9. Liquid crystals; 10. Phase transitions and critical phenomena; 11. Irreversible processes; Appendixes; Index.

  16. Deconstructing Statistical Analysis

    ERIC Educational Resources Information Center

    Snell, Joel

    2014-01-01

    The use of a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service appear legitimate, needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…

  17. Water Quality Statistics

    ERIC Educational Resources Information Center

    Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain

    2004-01-01

    Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…

  18. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced.

  19. Understanding Undergraduate Statistical Anxiety

    ERIC Educational Resources Information Center

    McKim, Courtney

    2014-01-01

    The purpose of this study was to understand undergraduate students' views of statistics. Results reveal that students with less anxiety have a higher interest in statistics and also believe in their ability to perform well in the course. Also students who have a more positive attitude about the class tend to have a higher belief in their…

  20. Explorations in Statistics: Correlation

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2010-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This sixth installment of "Explorations in Statistics" explores correlation, a familiar technique that estimates the magnitude of a straight-line relationship between two variables. Correlation is meaningful only when the…

  1. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  2. Apollo 14 microbial analyses

    NASA Technical Reports Server (NTRS)

    Taylor, G. R.

    1972-01-01

    Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.

  3. Tsallis statistics and neurodegenerative disorders

    NASA Astrophysics Data System (ADS)

    Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.

    2016-08-01

    In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD), and Huntington's disease (HD). The time series are concerned with electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular stride intervals) in ALS, PD and HD. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of the Tsallis q-triplet, namely {q_stat, q_sen, q_rel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between patients with ALS, PD and HD and healthy control subjects. The results indicate that estimations of Tsallis q-indices could be used as possible biomarkers, along with others, for improving classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medications and fall risk, and improving therapeutic interventions.

  4. Statistical methods for material characterization and qualification

    SciTech Connect

    Hunn, John D; Kercher, Andrew K

    2005-01-01

    This document describes a suite of statistical methods that can be used to infer lot parameters from the data obtained from inspection/testing of random samples taken from that lot. Some of these methods will be needed to perform the statistical acceptance tests required by the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program. Special focus has been placed on proper interpretation of acceptance criteria and unambiguous methods of reporting the statistical results. In addition, modified statistical methods are described that can provide valuable measures of quality for different lots of material. This document has been written for use as a reference and a guide for performing these statistical calculations. Examples of each method are provided. Uncertainty analysis (e.g., measurement uncertainty due to instrumental bias) is not included in this document, but should be considered when reporting statistical results.

  5. Statistical Methods for Material Characterization and Qualification

    SciTech Connect

    Kercher, A.K.

    2005-04-01

    This document describes a suite of statistical methods that can be used to infer lot parameters from the data obtained from inspection/testing of random samples taken from that lot. Some of these methods will be needed to perform the statistical acceptance tests required by the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program. Special focus has been placed on proper interpretation of acceptance criteria and unambiguous methods of reporting the statistical results. In addition, modified statistical methods are described that can provide valuable measures of quality for different lots of material. This document has been written for use as a reference and a guide for performing these statistical calculations. Examples of each method are provided. Uncertainty analysis (e.g., measurement uncertainty due to instrumental bias) is not included in this document, but should be considered when reporting statistical results.

  6. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets don't take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools like smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using for the first time generalised additive mixed models.
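
    The authors' GAMM was fitted in a dedicated statistical environment and is not reproduced here; purely as a simplified stand-in on synthetic data, the sketch below contrasts a straight-line fit with a spline smoother on a non-linear growth curve, which is the qualitative point the comparison above makes.

        # Simplified illustration: synthetic larval length vs age, not the C. vicina data.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(4)
        age_h = np.linspace(0, 120, 60)                              # hours, synthetic
        length = 17 / (1 + np.exp(-(age_h - 60) / 15)) + rng.normal(0, 0.4, 60)

        slope, intercept = np.polyfit(age_h, length, deg=1)
        linear_rss = float(np.sum((length - (slope * age_h + intercept)) ** 2))

        spline = UnivariateSpline(age_h, length, s=len(age_h) * 0.2)
        smooth_rss = float(np.sum((length - spline(age_h)) ** 2))

        print("straight-line RSS:", round(linear_rss, 2))
        print("spline RSS       :", round(smooth_rss, 2))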

  7. Statistical Treatment of Looking-Time Data

    ERIC Educational Resources Information Center

    Csibra, Gergely; Hernik, Mikolaj; Mascaro, Olivier; Tatone, Denis; Lengyel, Máté

    2016-01-01

    Looking times (LTs) are frequently measured in empirical research on infant cognition. We analyzed the statistical distribution of LTs across participants to develop recommendations for their treatment in infancy research. Our analyses focused on a common within-subject experimental design, in which longer looking to novel or unexpected stimuli is…

  8. The Variability Debate: More Statistics, More Linguistics

    ERIC Educational Resources Information Center

    Drai, Dan; Grodzinsky, Yosef

    2006-01-01

    We respond to critical comments and consider alternative statistical and syntactic analyses of our target paper which analyzed comprehension scores of Broca's aphasic patients from multiple sentence types in many languages, and showed that Movement but not Complexity or Mood are factors in the receptive deficit of these patients. Specifically, we…

  9. LED champing: statistically blessed?

    PubMed

    Wang, Zhuo

    2015-06-10

    LED champing (smart mixing of individual LEDs to match the desired color and lumens) and color mixing strategies have been widely used to maintain the color consistency of light engines. Light engines with champed LEDs can easily achieve the color consistency of a couple of MacAdam steps with widely distributed LEDs to begin with. From a statistical point of view, the distributions for the color coordinates and the flux after champing are studied. The related statistical parameters are derived, which facilitate process improvements such as Six Sigma and are instrumental to statistical quality control for mass production. PMID:26192863
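
    The paper's derived distributions are not given in this record; the Monte Carlo sketch below only illustrates the kind of question involved, mixing pairs of LEDs drawn from two invented flux/chromaticity bins and summarising the spread of the combined output (flux-weighted chromaticity is used as a simplification).

        # Synthetic illustration, not the paper's parameters or mixing rule.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000
        flux_a, x_a = rng.normal(100, 5, n), rng.normal(0.440, 0.004, n)   # bin A
        flux_b, x_b = rng.normal(95, 5, n), rng.normal(0.450, 0.004, n)    # bin B

        flux_mix = flux_a + flux_b
        x_mix = (flux_a * x_a + flux_b * x_b) / flux_mix   # simplified chromaticity mix

        print("combined flux : mean %.1f lm, sd %.2f" % (flux_mix.mean(), flux_mix.std()))
        print("combined x    : mean %.4f, sd %.5f" % (x_mix.mean(), x_mix.std()))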

  10. Statistical Perspectives on Stratospheric Transport

    NASA Technical Reports Server (NTRS)

    Sparling, L. C.

    1999-01-01

    Long-lived tropospheric source gases, such as nitrous oxide, enter the stratosphere through the tropical tropopause, are transported throughout the stratosphere by the Brewer-Dobson circulation, and are photochemically destroyed in the upper stratosphere. These chemical constituents, or "tracers" can be used to track mixing and transport by the stratospheric winds. Much of our understanding about the stratospheric circulation is based on large scale gradients and other spatial features in tracer fields constructed from satellite measurements. The point of view presented in this paper is different, but complementary, in that transport is described in terms of tracer probability distribution functions (PDFs). The PDF is computed from the measurements, and is proportional to the area occupied by tracer values in a given range. The flavor of this paper is tutorial, and the ideas are illustrated with several examples of transport-related phenomena, annotated with remarks that summarize the main point or suggest new directions. One example shows how the multimodal shape of the PDF gives information about the different branches of the circulation. Another example shows how the statistics of fluctuations from the most probable tracer value give insight into mixing between different regions of the atmosphere. Also included is an analysis of the time-dependence of the PDF during the onset and decline of the winter circulation, and a study of how "bursts" in the circulation are reflected in transient periods of rapid evolution of the PDF. The dependence of the statistics on location and time are also shown to be important for practical problems related to statistical robustness and satellite sampling. The examples illustrate how physically-based statistical analysis can shed some light on aspects of stratospheric transport that may not be obvious or quantifiable with other types of analyses. An important motivation for the work presented here is the need for synthesis of the

  11. The uncertain hockey stick: a statistical perspective on the reconstruction of past temperatures

    NASA Astrophysics Data System (ADS)

    Nychka, Douglas

    2007-03-01

    A reconstruction of past temperatures based on proxies is inherently a statistical process and a deliberate statistical model for the reconstruction can also provide companion measures of uncertainty. This view is often missed in the heat of debating the merits of different analyses and interpretations of paleoclimate data. Although statistical error is acknowledged to be just one component of the total uncertainty in a reconstruction, it can provide a valuable yardstick for comparing different reconstructions or drawing inferences about features. In this talk we suggest a framework where the reconstruction is expressed as a conditional distribution of the temperatures given the proxies. Random draws from this distribution provide an ensemble of reconstructions where the spread among ensemble members is a valid statistical measure of uncertainty. This approach is illustrated for Northern Hemisphere temperatures and the multi-proxy data used by Mann, Bradley and Hughes (1999). Here we explore the scope of the statistical assumptions needed to carry through a rigorous analysis and use Monte Carlo sampling to determine the uncertainty in maxima or other complicated statistics in the reconstructed series. The principles behind this simple example for the Northern Hemisphere can be extended to regional reconstructions, incorporation of additional types of proxies and the use of statistics from numerical models.
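
    The talk's framework is summarised only in words here; the toy sketch below (synthetic data, not the MBH99 proxies) illustrates the ensemble idea: calibrate a proxy against temperature, draw many plausible reconstructions from the fitted error distribution, and summarise the spread of a derived statistic such as the series maximum.

        # Toy illustration of ensemble-based uncertainty, not the actual reconstruction.
        import numpy as np

        rng = np.random.default_rng(6)
        n_years = 200
        truth = 0.3 * np.sin(np.linspace(0, 6, n_years))       # synthetic "temperature"
        proxy = truth + rng.normal(0, 0.15, n_years)           # synthetic noisy proxy

        # calibrate on an overlap period (here, the last 50 "years")
        a, b = np.polyfit(proxy[-50:], truth[-50:], deg=1)
        resid_sd = np.std(truth[-50:] - (a * proxy[-50:] + b))

        ensemble = a * proxy + b + rng.normal(0, resid_sd, size=(1000, n_years))
        maxima = ensemble.max(axis=1)
        print("maximum of reconstruction: %.2f +/- %.2f (ensemble spread)"
              % (maxima.mean(), maxima.std()))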

  12. Statistics of the sagas

    NASA Astrophysics Data System (ADS)

    Richfield, Jon; bookfeller

    2016-07-01

    In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.

  13. Brain Tumor Statistics

    MedlinePlus

    ... facts and statistics here include brain and central nervous system tumors (including spinal cord, pituitary and pineal gland ... U.S. living with a primary brain and central nervous system tumor. This year, nearly 17,000 people will ...

  14. Elements of Statistics

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2016-05-01

    This chapter is devoted to two objectives. The first one is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the given courses. In this spirit we recall very basic notions, that is, definitions and properties that we think are sufficient to benefit from courses given in the Astrostatistical School. Thus we briefly give definitions and elementary properties of random variables and vectors, distributions, estimation and tests, and maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not try to give a rigorous presentation and, given the space devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.

  15. Plague Maps and Statistics

    MedlinePlus

    ... and Statistics Plague in the United States Plague was first introduced ... per year in the United States: 1900-2012. Plague Worldwide Plague epidemics have occurred in Africa, Asia, ...

  16. Cooperative Learning in Statistics.

    ERIC Educational Resources Information Center

    Keeler, Carolyn M.; And Others

    1994-01-01

    Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)

  17. Purposeful Statistical Investigations

    ERIC Educational Resources Information Center

    Day, Lorraine

    2014-01-01

    Lorraine Day provides us with a great range of statistical investigations using various resources such as maths300 and TinkerPlots. Each of the investigations link mathematics to students' lives and provide engaging and meaningful contexts for mathematical inquiry.

  18. Tuberculosis Data and Statistics

    MedlinePlus

    ... Data and Statistics ... United States publication. PDF [6 MB] Interactive TB Data Tool: Online Tuberculosis Information System (OTIS). OTIS is ...

  19. Understanding Solar Flare Statistics

    NASA Astrophysics Data System (ADS)

    Wheatland, M. S.

    2005-12-01

    A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well known flare power-law size distribution. Although avalanche models are perhaps the favoured model to describe flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
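
    For the size distribution mentioned above, the standard maximum-likelihood estimator of a power-law index above a threshold x_min is alpha = 1 + n / sum(ln(x_i / x_min)); the sketch below applies it to synthetic "flare sizes" (not observed flare data).

        # Illustrative power-law fit on synthetic sizes.
        import numpy as np

        rng = np.random.default_rng(7)
        alpha_true, x_min = 1.8, 1.0
        u = rng.random(5000)
        sizes = x_min * (1 - u) ** (-1 / (alpha_true - 1))   # inverse-transform sampling

        alpha_hat = 1 + sizes.size / np.log(sizes / x_min).sum()
        print("estimated power-law index:", round(alpha_hat, 3))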

  20. Latest statistics on cardiovascular disease in Australia.

    PubMed

    Waters, Anne-Marie; Trinh, Lany; Chau, Theresa; Bourchier, Michael; Moon, Lynelle

    2013-06-01

    The results presented herein summarize the most up-to-date cardiovascular statistics available at this time in Australia. The analysis presented here is based on and extends results published in two Australian Institute of Health and Welfare (AIHW) reports, namely Cardiovascular disease: Australian facts 2011 and the cardiovascular disease (CVD) section of Australia's Health 2012. Despite significant improvements in the cardiovascular health of Australians in recent decades, CVD continues to impose a heavy burden on Australians in terms of illness, disability and premature death. Direct health care expenditure for CVD exceeds that for any other disease group. The most recent national data have been analysed to describe patterns and trends in CVD hospitalization and death rates, with additional analysis by Indigenous status, remoteness and socioeconomic group. The incidence of and case-fatality from major coronary events has also been examined. Although CVD death rates have declined steadily in Australia since the late 1960s, CVD still accounts for a larger proportion of deaths (33% in 2009) than any other disease group. Worryingly, the rate at which the coronary heart disease death rate has been falling in recent years has slowed in younger (35-54 years) age groups. Between 1998-99 and 2009-10, the overall rate of hospitalizations for CVD fell by 13%, with declines observed for most major CVDs. In conclusion, CVD disease remains a significant health problem in Australia despite decreasing death and hospitalization rates. PMID:23517328

  1. Guidelines for Meta-Analyses of Counseling Psychology Research

    ERIC Educational Resources Information Center

    Quintana, Stephen M.; Minami, Takuya

    2006-01-01

    This article conceptually describes the steps in conducting quantitative meta-analyses of counseling psychology research with minimal reliance on statistical formulas. The authors identify sources that describe necessary statistical formulas for various meta-analytic calculations and describe recent developments in meta-analytic techniques. The…

  2. Statistical process control

    SciTech Connect

    Oakland, J.S.

    1986-01-01

    Addressing the increasing importance for firms to have a thorough knowledge of statistically based quality control procedures, this book presents the fundamentals of statistical process control (SPC) in a non-mathematical, practical way. It provides real-life examples and data drawn from a wide variety of industries. The foundations of good quality management and process control, and control of conformance and consistency during production are given. Offers clear guidance to those who wish to understand and implement modern SPC techniques.

  3. Publication bias in dermatology systematic reviews and meta-analyses.

    PubMed

    Atakpo, Paul; Vassar, Matt

    2016-05-01

    Systematic reviews and meta-analyses in dermatology provide high-level evidence for clinicians and policy makers that influence clinical decision making and treatment guidelines. One methodological problem with systematic reviews is the underrepresentation of unpublished studies. This problem is due in part to publication bias. Omission of statistically non-significant data from meta-analyses may result in overestimation of treatment effect sizes which may lead to clinical consequences. Our goal was to assess whether systematic reviewers in dermatology evaluate and report publication bias. Further, we wanted to conduct our own evaluation of publication bias on meta-analyses that failed to do so. Our study considered systematic reviews and meta-analyses from ten dermatology journals from 2006 to 2016. A PubMed search was conducted, and all full-text articles that met our inclusion criteria were retrieved and coded by the primary author. 293 articles were included in our analysis. Additionally, we formally evaluated publication bias in meta-analyses that failed to do so using trim and fill and cumulative meta-analysis by precision methods. Publication bias was mentioned in 107 articles (36.5%) and was formally evaluated in 64 articles (21.8%). Visual inspection of a funnel plot was the most common method of evaluating publication bias. Publication bias was present in 45 articles (15.3%), not present in 57 articles (19.5%) and not determined in 191 articles (65.2%). Using the trim and fill method, 7 meta-analyses (33.33%) showed evidence of publication bias. Although the trim and fill method only found evidence of publication bias in 7 meta-analyses, the cumulative meta-analysis by precision method found evidence of publication bias in 15 meta-analyses (71.4%). Many of the reviews in our study did not mention or evaluate publication bias. Further, of the 42 articles that stated following PRISMA reporting guidelines, 19 (45.2%) evaluated for publication bias. In
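
    Visual funnel-plot inspection, the most common check reported above, is straightforward to reproduce; the sketch below builds a funnel plot from synthetic meta-analysis data (not the reviewed studies) in which no publication bias is present, so the scatter should look roughly symmetric about the mean effect.

        # Synthetic funnel plot; matplotlib assumed available.
        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(8)
        se = rng.uniform(0.05, 0.5, 40)                 # standard errors of 40 invented studies
        effect = 0.3 + rng.normal(0, se)                # unbiased effect estimates

        plt.scatter(effect, se)
        plt.gca().invert_yaxis()                        # large (precise) studies at the top
        plt.axvline(effect.mean(), linestyle="--")
        plt.xlabel("effect size")
        plt.ylabel("standard error")
        plt.title("Funnel plot (synthetic, no publication bias)")
        plt.show()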

  4. Statistical Physics of Fields

    NASA Astrophysics Data System (ADS)

    Kardar, Mehran

    2006-06-01

    While many scientists are familiar with fractals, fewer are familiar with the concepts of scale-invariance and universality which underlie the ubiquity of their shapes. These properties may emerge from the collective behaviour of simple fundamental constituents, and are studied using statistical field theories. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook demonstrates how such theories are formulated and studied. Perturbation theory, exact solutions, renormalization groups, and other tools are employed to demonstrate the emergence of scale invariance and universality, and the non-equilibrium dynamics of interfaces and directed paths in random media are discussed. Ideal for advanced graduate courses in statistical physics, it contains an integrated set of problems, with solutions to selected problems at the end of the book. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873413. Based on lecture notes from a course on statistical mechanics taught by the author at MIT, the book contains 65 exercises with solutions to selected problems, features a thorough introduction to the methods of statistical field theory, and is ideal for advanced graduate courses in statistical physics.

  5. Statistical Physics of Particles

    NASA Astrophysics Data System (ADS)

    Kardar, Mehran

    2006-06-01

    Statistical physics has its origins in attempts to describe the thermal properties of matter in terms of its constituent particles, and has played a fundamental role in the development of quantum mechanics. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook introduces the central concepts and tools of statistical physics. It contains a chapter on probability and related issues such as the central limit theorem and information theory, and covers interacting particles, with an extensive description of the van der Waals equation and its derivation by mean field approximation. It also contains an integrated set of problems, with solutions to selected problems at the end of the book. It will be invaluable for graduate and advanced undergraduate courses in statistical physics. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873420. Based on lecture notes from a course on statistical mechanics taught by the author at MIT, the book contains 89 exercises with solutions to selected problems, includes chapters on probability and interacting particles, and is ideal for graduate courses in statistical mechanics.

  6. [Food additives and healthiness].

    PubMed

    Heinonen, Marina

    2014-01-01

    Additives are used for improving food structure or preventing its spoilage, for example. Many substances used as additives are also naturally present in food. The safety of additives is evaluated according to commonly agreed principles. If high concentrations of an additive cause adverse health effects for humans, a limit of acceptable daily intake (ADI) is set for it. An additive is a risk only when ADI is exceeded. The healthiness of food is measured on the basis of nutrient density and scientifically proven effects.

  7. A statistical anomaly indicates symbiotic origins of eukaryotic membranes.

    PubMed

    Bansal, Suneyna; Mittal, Aditya

    2015-04-01

    Compositional analyses of nucleic acids and proteins have shed light on possible origins of living cells. In this work, rigorous compositional analyses of ∼5000 plasma membrane lipid constituents of 273 species in the three life domains (archaea, eubacteria, and eukaryotes) revealed a remarkable statistical paradox, indicating symbiotic origins of eukaryotic cells involving eubacteria. For lipids common to plasma membranes of the three domains, the number of carbon atoms in eubacteria was found to be similar to that in eukaryotes. However, mutually exclusive subsets of the same data show exactly the opposite: the number of carbon atoms in lipids of eukaryotes was higher than in eubacteria. This statistical paradox, called Simpson's paradox, was absent for lipids in archaea and for lipids not common to plasma membranes of the three domains. This indicates the presence of interaction(s) and/or association(s) in lipids forming plasma membranes of eubacteria and eukaryotes but not for those in archaea. Further inspection of membrane lipid structures affecting physicochemical properties of plasma membranes provides the first evidence (to our knowledge) on the symbiotic origins of eukaryotic cells based on the "third front" (i.e., lipids) in addition to the growing compositional data from nucleic acids and proteins.
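
    A small numerical example (with invented numbers, not the lipid data) shows the structure of the paradox described above: each subgroup points one way, the pooled totals point the other, because the groups are sampled unevenly.

        # Toy Simpson's paradox: per-class means vs pooled means.
        euk = {"class A": (16.0, 90), "class B": (22.0, 10)}   # (mean carbons, count)
        eub = {"class A": (15.0, 10), "class B": (21.0, 90)}

        def pooled_mean(groups):
            total = sum(mean * n for mean, n in groups.values())
            count = sum(n for _, n in groups.values())
            return total / count

        print("class A: eukaryote", euk["class A"][0], "> eubacteria", eub["class A"][0])
        print("class B: eukaryote", euk["class B"][0], "> eubacteria", eub["class B"][0])
        print("pooled : eukaryote %.1f < eubacteria %.1f"
              % (pooled_mean(euk), pooled_mean(eub)))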

  8. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.

  9. Phylogenomic analyses and improved resolution of Cetartiodactyla.

    PubMed

    Zhou, Xuming; Xu, Shixia; Yang, Yunxia; Zhou, Kaiya; Yang, Guang

    2011-11-01

    The remarkable antiquity, diversity, and significance in the ecology and evolution of Cetartiodactyla have inspired numerous attempts to resolve their phylogenetic relationships. However, previous analyses based on limited samples of nuclear genes or mitochondrial DNA sequences have generated results that were either inconsistent with one another, weakly supported, or highly sensitive to analytical conditions. Here, we present strongly supported results based upon over 1.4 Mb of an aligned DNA sequence matrix from 110 single-copy nuclear protein-coding genes of 21 Cetartiodactyla species, which represent major Cetartiodactyla lineages, and three species of Perissodactyla and Carnivora as outgroups. Phylogenetic analysis of this newly developed genomic sequence data using a codon-based model and recently developed models of rate autocorrelation resolved the phylogenetic relationships among the major cetartiodactylan lineages, and within those lineages, with a high degree of confidence. Cetacea was found to nest within Artiodactyla as the sister group of Hippopotamidae, and Tylopoda was corroborated as the sole basal clade of Cetartiodactyla. Within Cetacea, the monophyletic status of Odontoceti relative to Mysticeti, the basal position of Physeteroidea in Odontoceti, the non-monophyly of the river dolphins, and the sister relationship between Delphinidae and Monodontidae+Phocoenidae were strongly supported. In particular, the groups of Tursiops (bottlenose dolphins) and Stenella (spotted dolphins) were validated as unnatural groups. Additionally, a very narrow time frame of ∼3 My (million years) was found for the rapid diversification of delphinids in the late Miocene, which made it difficult to resolve the phylogenetic relationships within the Delphinidae, especially for previous studies with limited data sets. The present study provides a statistically well-supported phylogenetic framework of Cetartiodactyla, which represents an important step toward ending some of

  10. Consumption Patterns and Perception Analyses of Hangwa

    PubMed Central

    Kwock, Chang Geun; Lee, Min A; Park, So Hyun

    2012-01-01

    Hangwa is a traditional food that, to keep pace with current consumption trends, needs marketing strategies to extend its consumption. Therefore, the purpose of this study was to analyze consumers’ consumption patterns and perception of Hangwa to increase consumption in the market. A questionnaire was sent to 250 consumers by e-mail from Oct 8∼23, 2009, and the data from 231 persons were analyzed in this study. Descriptive statistical analyses, paired-samples t-tests, and importance-performance analyses were conducted using SPSS WIN 17.0. According to the results, Hangwa was purchased mainly ‘for present’ (39.8%) and the main reasons for buying it were ‘traditional image’ (33.3%) and ‘taste’ (22.5%). When importance and performance of attributes considered in purchasing Hangwa were evaluated, performance was assessed to be lower than importance for all attributes. The attributes in the first quadrant with a high importance and a high performance were ‘a sanitary process’, ‘a rigorous quality mark’ and ‘taste’, which were related to the quality of the products. In addition, those with a high importance but a low performance were ‘popularization through advertisement’, ‘promotion through mass media’, ‘conversion of thought on traditional foods’, ‘a reasonable price’ and ‘a wide range of price’. In conclusion, Hangwa manufacturers need to diversify products and extend the expiration date based on technologies to promote its consumption. In terms of price, Hangwa should become more available by lowering the price barrier for consumers who are sensitive to price. PMID:24471065

  11. Nonstationary statistical theory for multipactor

    SciTech Connect

    Anza, S.; Vicente, C.; Gil, J.

    2010-06-15

    This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.

  12. Statistical properties of convex clustering

    PubMed Central

    Tan, Kean Ming; Witten, Daniela

    2016-01-01

    In this manuscript, we study the statistical properties of convex clustering. We establish that convex clustering is closely related to single linkage hierarchical clustering and k-means clustering. In addition, we derive the range of the tuning parameter for convex clustering that yields a non-trivial solution. We also provide an unbiased estimator of the degrees of freedom, and provide a finite sample bound for the prediction error for convex clustering. We compare convex clustering to some traditional clustering methods in simulation studies.
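
    A minimal sketch of the convex clustering objective may help fix ideas. The Python code below uses NumPy; the data, fusion weights, and penalty values are invented for illustration, and no solver is implemented. It simply evaluates the objective 0.5*sum_i ||x_i - u_i||^2 + lam*sum_{i<j} w_ij*||u_i - u_j|| at two extreme candidate solutions, showing that a small penalty favours one centroid per point while a large penalty favours complete fusion to the grand mean, the trivial solution whose onset the tuning-parameter analysis characterises.

      # Sketch of the convex clustering objective (toy data, no solver).
      import numpy as np

      def convex_clustering_objective(X, U, lam, W):
          n = X.shape[0]
          fit = 0.5 * np.sum((X - U) ** 2)                      # data-fidelity term
          fuse = sum(W[i, j] * np.linalg.norm(U[i] - U[j])       # fusion penalty
                     for i in range(n) for j in range(i + 1, n))
          return fit + lam * fuse

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
      W = np.ones((10, 10))                                      # uniform fusion weights

      for lam in (0.0, 0.05, 10.0):
          obj_points = convex_clustering_objective(X, X.copy(), lam, W)            # u_i = x_i
          obj_mean = convex_clustering_objective(X, np.tile(X.mean(0), (10, 1)), lam, W)  # all fused
          print(lam, "u_i = x_i:", round(obj_points, 2), "all fused:", round(obj_mean, 2))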

  13. NASA Pocket Statistics: 1997 Edition

    NASA Technical Reports Server (NTRS)

    1997-01-01

    POCKET STATISTICS is published by the NATIONAL AERONAUTICS AND SPACE ADMINISTRATION (NASA). Included in each edition is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, Aeronautics and Space Transportation and NASA Procurement, Financial and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. All Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  14. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    PubMed Central

    2010-01-01

    Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. This makes it easy to join, extend, and combine datasets

  15. Primarily Statistics: Developing an Introductory Statistics Course for Pre-Service Elementary Teachers

    ERIC Educational Resources Information Center

    Green, Jennifer L.; Blankenship, Erin E.

    2013-01-01

    We developed an introductory statistics course for pre-service elementary teachers. In this paper, we describe the goals and structure of the course, as well as the assessments we implemented. Additionally, we use example course work to demonstrate pre-service teachers' progress both in learning statistics and as novice teachers. Overall, the…

  16. Statistical analysis of single-trial Granger causality spectra.

    PubMed

    Brovelli, Andrea

    2012-01-01

    Granger causality analysis is becoming central for the analysis of interactions between neural populations and oscillatory networks. However, it is currently unclear whether single-trial estimates of Granger causality spectra can be used reliably to assess directional influence. We addressed this issue by combining single-trial Granger causality spectra with statistical inference based on general linear models. The approach was assessed on synthetic and neurophysiological data. Synthetic bivariate data was generated using two autoregressive processes with unidirectional coupling. We simulated two hypothetical experimental conditions: the first mimicked a constant and unidirectional coupling, whereas the second modelled a linear increase in coupling across trials. The statistical analysis of single-trial Granger causality spectra, based on t-tests and linear regression, successfully recovered the underlying pattern of directional influence. In addition, we characterised the minimum number of trials and coupling strengths required for significant detection of directionality. Finally, we demonstrated the relevance for neurophysiology by analysing two local field potentials (LFPs) simultaneously recorded from the prefrontal and premotor cortices of a macaque monkey performing a conditional visuomotor task. Our results suggest that the combination of single-trial Granger causality spectra and statistical inference provides a valuable tool for the analysis of large-scale cortical networks and brain connectivity.
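
    The synthetic benchmark can be sketched in a few lines. The simulation below is a simplified, time-domain analogue of the approach described above (the paper itself works with Granger causality spectra): two autoregressive processes with unidirectional coupling are generated per trial, a per-trial Granger-style log-variance-ratio is computed, and a one-sample t-test across trials assesses directionality. All parameter values are illustrative.

      # Simplified time-domain sketch of single-trial directional-influence testing.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def one_trial(n=500, coupling=0.4):
          x = np.zeros(n); y = np.zeros(n)
          for t in range(1, n):
              x[t] = 0.5 * x[t - 1] + rng.normal()
              y[t] = 0.5 * y[t - 1] + coupling * x[t - 1] + rng.normal()
          yp, y1, x1 = y[1:], y[:-1], x[:-1]
          # restricted model: y on its own past
          res_r = yp - np.polyval(np.polyfit(y1, yp, 1), y1)
          # full model: y on its own past plus x's past
          A = np.column_stack([y1, x1, np.ones_like(y1)])
          res_f = yp - A @ np.linalg.lstsq(A, yp, rcond=None)[0]
          return np.log(res_r.var() / res_f.var())     # > 0 suggests x -> y influence

      gc = np.array([one_trial() for _ in range(30)])   # 30 simulated "trials"
      print(stats.ttest_1samp(gc, 0.0))                 # directionality test across trials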

  17. Statistical Physics of Fracture

    SciTech Connect

    Alava, Mikko; Nukala, Phani K; Zapperi, Stefano

    2006-05-01

    Disorder and long-range interactions are two of the key components that make material failure an interesting playfield for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds are subject to an increasing external load. These models describe on a qualitative level the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among these partly still controversial issues are the scaling and size-effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
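
    As a very reduced illustration of the ingredients named above (random failure thresholds, load redistribution, bursts of micro-failures), the sketch below simulates a global load-sharing fiber bundle, a mean-field relative of the lattice models discussed; it is not one of those lattice models, and the uniform threshold distribution is an arbitrary choice.

      # Global load-sharing fiber bundle: quasi-static loading with random thresholds.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 20000
      thresholds = np.sort(rng.uniform(size=n))   # random failure thresholds, sorted

      avalanches = []
      i = 0
      while i < n:
          # external load just large enough to break the weakest intact fiber
          F = thresholds[i] * (n - i)
          j = i
          # the load shed by broken fibers is shared by survivors; keep breaking
          # fibers whose threshold is exceeded at this fixed external load
          while j < n and thresholds[j] * (n - j) <= F:
              j += 1
          avalanches.append(j - i)                # burst ("avalanche") size
          i = j

      sizes, counts = np.unique(avalanches, return_counts=True)
      print(list(zip(sizes[:8], counts[:8])))     # small bursts dominate, heavy tail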

  18. Uncertainty quantification approaches for advanced reactor analyses.

    SciTech Connect

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and to liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
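
    To make the 95/95 criterion concrete, one widely used device for sizing such a statistical demonstration (not necessarily the procedure recommended in this report) is Wilks' first-order nonparametric formula, sketched below.

      # Wilks' first-order, one-sided tolerance formula: smallest number of code
      # runs n such that the largest of n results bounds the 95th percentile with
      # 95% confidence, i.e. 1 - coverage**n >= confidence.
      import math

      def wilks_first_order(coverage=0.95, confidence=0.95):
          return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

      print(wilks_first_order())   # 59 best-estimate runs for a one-sided 95/95 statement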

  19. Helping Alleviate Statistical Anxiety with Computer Aided Statistical Classes

    ERIC Educational Resources Information Center

    Stickels, John W.; Dobbs, Rhonda R.

    2007-01-01

    This study, Helping Alleviate Statistical Anxiety with Computer Aided Statistics Classes, investigated whether undergraduate students' anxiety about statistics changed when statistics is taught using computers compared to the traditional method. Two groups of students were questioned concerning their anxiety about statistics. One group was taught…

  20. Meta-analyses of randomized controlled trials.

    PubMed

    Sacks, H S; Berrier, J; Reitman, D; Ancona-Berk, V A; Chalmers, T C

    1987-02-19

    A new type of research, termed meta-analysis, attempts to analyze and combine the results of previous reports. We found 86 meta-analyses of reports of randomized controlled trials in the English-language literature. We evaluated the quality of these meta-analyses, using a scoring method that considered 23 items in six major areas--study design, combinability, control of bias, statistical analysis, sensitivity analysis, and application of results. Only 24 meta-analyses (28 percent) addressed all six areas, 31 (36 percent) addressed five, 25 (29 percent) addressed four, 5 (6 percent) addressed three, and 1 (1 percent) addressed two. Of the 23 individual items, between 1 and 14 were addressed satisfactorily (mean +/- SD, 7.7 +/- 2.7). We conclude that an urgent need exists for improved methods in literature searching, quality evaluation of trials, and synthesizing of the results.

  1. Statistical mechanics of economics I

    NASA Astrophysics Data System (ADS)

    Kusmartsev, F. V.

    2011-02-01

    We show that statistical mechanics is useful in the description of financial crises and economics. Taking a large number of instant snapshots of a market over an interval of time, we construct their ensembles and study their statistical interference. This results in a probability description of the market and gives capital, money, income, wealth, and debt distributions, which in most cases take the form of the Bose-Einstein distribution. In addition, statistical mechanics provides the main market equations and laws which govern the correlations between the amount of money, debt, product, prices, and number of retailers. We applied these relations to a study of the evolution of the economy in the USA between 1996 and 2008 and observe that over that time the income of the majority of the population is well described by the Bose-Einstein distribution, whose parameters differ from year to year. Each financial crisis corresponds to a peak in the absolute activity coefficient. The analysis correctly indicates past crises and predicts the future one.
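
    For readers unfamiliar with the functional form invoked here, the snippet below evaluates a Bose-Einstein-type occupation function over an income grid; the parameter values and the grid are purely illustrative and are not the fitted values reported in the paper.

      # Bose-Einstein occupation form, 1 / (exp((x - mu)/T) - 1), evaluated on a
      # toy "income" grid for two illustrative parameter choices.
      import numpy as np

      def bose_einstein(x, mu, T):
          return 1.0 / (np.exp((x - mu) / T) - 1.0)   # defined for x > mu

      income = np.linspace(10e3, 200e3, 5)            # illustrative annual income levels
      for year_T, label in [(25e3, "calm year"), (40e3, "turbulent year")]:
          print(label, np.round(bose_einstein(income, mu=5e3, T=year_T), 4))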

  2. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    This report covers work performed by Science Applications International Corporation (SAIC) under contract NAS8-39386 from the NASA Marshall Space Flight Center entitled LDEF Satellite Radiation Analyses. The basic objective of the study was to evaluate the accuracy of present models and computational methods for defining the ionizing radiation environment for spacecraft in Low Earth Orbit (LEO) by making comparisons with radiation measurements made on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The emphasis of the work here is on predictions and comparisons with LDEF measurements of induced radioactivity and Linear Energy Transfer (LET) measurements. These model/data comparisons have been used to evaluate the accuracy of current models for predicting the flux and directionality of trapped protons for LEO missions.

  3. EEG analyses with SOBI.

    SciTech Connect

    Glickman, Matthew R.; Tang, Akaysha

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
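
    The flavour of the SOBI decomposition can be conveyed with a simplified single-lag variant (essentially the AMUSE algorithm): whiten the recordings, then diagonalise one symmetrised time-lagged covariance matrix. The sketch below uses synthetic sources and a random mixing matrix rather than EEG data, and it omits the joint diagonalisation over many lags that SOBI proper performs.

      # Single-lag second-order blind separation (AMUSE-style sketch, synthetic data).
      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(5000)
      S = np.vstack([np.sin(0.03 * t), np.sign(np.sin(0.11 * t))])   # two sources
      A = rng.normal(size=(2, 2))                                    # unknown mixing
      X = A @ S + 0.05 * rng.normal(size=S.shape)                    # "recordings"

      X = X - X.mean(axis=1, keepdims=True)
      d, E = np.linalg.eigh(np.cov(X))              # whitening from zero-lag covariance
      W = E @ np.diag(d ** -0.5) @ E.T
      Z = W @ X

      tau = 3                                        # one non-zero lag
      C = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
      C = 0.5 * (C + C.T)                            # symmetrise
      _, U = np.linalg.eigh(C)                       # eigenvectors give the unmixing
      sources_est = U.T @ Z                          # recovered up to order, sign, scale
      print(np.round(np.corrcoef(sources_est, S)[2:, :2], 2))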

  4. XS: An Analysis and Synthesis System for Linear Regression Constructed by Integrating a Graphical Statistical System, a Relational Database System and an Expert System Shell

    PubMed Central

    Johannes, R.S.; Brown, C. Hendricks; Onstad, Lynn E.

    1989-01-01

    This paper introduces an analysis and synthesis system (XS) which aids users in performing statistical analyses. In any large study, the dataset itself grows and changes dramatically over its life-course. Important datasets are often analyzed by many people over extended periods of time. Effective analysis of these large datasets depends in large part on integrating past inferences and analytical decisions into current analyses. XS provides statistical expertise to answer current problems, but it also makes the results of past analyses available for potential integration and consistency checking. In addition, XS permits the integration of knowledge outside the confines of the dataset with statistical results and user input in order to make analytical decisions.

  5. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141

  6. Network class superposition analyses.

    PubMed

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
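
    A toy version of the construction may clarify what T is. In the sketch below, T is the average of the deterministic state-transition matrices of a tiny, hand-made class of three-node Boolean networks; for such a superposition, trace(T) equals the expected number of point attractors across the class. The three update rules are arbitrary illustrations and are not the Strong Inhibition class studied in the paper.

      # Toy "network class superposition": average the transition matrices of a
      # small class of Boolean networks into one stochastic matrix T.
      import itertools
      import numpy as np

      n = 3                                   # three Boolean nodes -> 8 states
      states = list(itertools.product((0, 1), repeat=n))
      index = {s: i for i, s in enumerate(states)}

      def rule_a(s): return (s[1], s[2], s[0])                       # rotation
      def rule_b(s): return (s[0] and s[1], s[1] or s[2], 1 - s[2])
      def rule_c(s): return (1 - s[1], s[0], s[1] and s[2])

      def transition_matrix(rule):
          M = np.zeros((len(states), len(states)))
          for s in states:
              M[index[s], index[tuple(int(v) for v in rule(s))]] = 1.0
          return M

      T = np.mean([transition_matrix(r) for r in (rule_a, rule_b, rule_c)], axis=0)
      print("row sums:", T.sum(axis=1))                    # each row sums to 1: stochastic
      print("expected number of point attractors:", np.trace(T))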

  7. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  8. Deformed Quantum Statistics

    NASA Astrophysics Data System (ADS)

    Inomata, Akira

    1997-03-01

    To understand possible physical consequences of quantum deformation, we investigate statistical behaviors of a quon gas. The quon is an object which obeys the minimally deformed commutator (or q-mutator): a a† - q a†a = 1 with -1 ≤ q ≤ 1. Although q=1 and q=-1 appear to correspond respectively to boson and fermion statistics, it is not easy to create a gas which unifies the boson gas and the fermion gas. We present a model which is able to interpolate between the two limits. The quon gas shows Bose-Einstein condensation near the boson limit in two dimensions.

  9. Review of statistical methods used in enhanced-oil-recovery research and performance prediction. [131 references

    SciTech Connect

    Selvidge, J.E.

    1982-06-01

    Recent literature in the field of enhanced oil recovery (EOR) was surveyed to determine the extent to which researchers in EOR take advantage of statistical techniques in analyzing their data. In addition to determining the current level of reliance on statistical tools, another objective of this study is to promote by example the greater use of these tools. To serve this objective, the discussion of the techniques highlights the observed trend toward the use of increasingly more sophisticated methods and points out the strengths and pitfalls of different approaches. Several examples are also given of opportunities for extending EOR research findings by additional statistical manipulation. The search of the EOR literature, conducted mainly through computerized data bases, yielded nearly 200 articles containing mathematical analysis of the research. Of these, 21 were found to include examples of statistical approaches to data analysis and are discussed in detail in this review. The use of statistical techniques, as might be expected from their general purpose nature, extends across nearly all types of EOR research covering thermal methods of recovery, miscible processes, and micellar polymer floods. Data come from field tests, the laboratory, and computer simulation. The statistical methods range from simple comparisons of mean values to multiple non-linear regression equations and to probabilistic decision functions. The methods are applied to both engineering and economic data. The results of the survey are grouped by statistical technique and include brief descriptions of each of the 21 relevant papers. Complete abstracts of the papers are included in the bibliography. Brief bibliographic information (without abstracts) is also given for the articles identified in the initial search as containing mathematical analyses using other than statistical methods.

  10. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... analyses, and experiments, and those parts of other studies involving statistical methodology shall be.... When alternative models and variables have been employed, a record shall be kept of these...

  11. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... analyses, and experiments, and those parts of other studies involving statistical methodology shall be.... When alternative models and variables have been employed, a record shall be kept of these...

  12. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... analyses, and experiments, and those parts of other studies involving statistical methodology shall be.... When alternative models and variables have been employed, a record shall be kept of these...

  13. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... analyses, and experiments, and those parts of other studies involving statistical methodology shall be.... When alternative models and variables have been employed, a record shall be kept of these...

  14. 47 CFR 1.363 - Introduction of statistical data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... analyses, and experiments, and those parts of other studies involving statistical methodology shall be.... When alternative models and variables have been employed, a record shall be kept of these...

  15. Statistical insight: a review.

    PubMed

    Vardell, Emily; Garcia-Barcena, Yanira

    2012-01-01

    Statistical Insight is a database that offers the ability to search across multiple sources of data, including the federal government, private organizations, research centers, and international intergovernmental organizations in one search. Two sample searches on the same topic, a basic and an advanced, were conducted to evaluate the database.

  16. Pilot Class Testing: Statistics.

    ERIC Educational Resources Information Center

    Washington Univ., Seattle. Washington Foreign Language Program.

    Statistics derived from test score data from the pilot classes participating in the Washington Foreign Language Program are presented in tables in this report. An index accompanies the tables, itemizing the classes by level (FLES, middle, and high school), grade test, language skill, and school. MLA-Coop test performances for each class were…

  17. Statistical Reasoning over Lunch

    ERIC Educational Resources Information Center

    Selmer, Sarah J.; Bolyard, Johnna J.; Rye, James A.

    2011-01-01

    Students in the 21st century are exposed daily to a staggering amount of numerically infused media. In this era of abundant numeric data, students must be able to engage in sound statistical reasoning when making life decisions after exposure to varied information. The context of nutrition can be used to engage upper elementary and middle school…

  18. Selected Outdoor Recreation Statistics.

    ERIC Educational Resources Information Center

    Bureau of Outdoor Recreation (Dept. of Interior), Washington, DC.

    In this recreational information report, 96 tables are compiled from Bureau of Outdoor Recreation programs and surveys, other governmental agencies, and private sources. Eight sections comprise the document: (1) The Bureau of Outdoor Recreation, (2) Federal Assistance to Recreation, (3) Recreation Surveys for Planning, (4) Selected Statistics of…

  19. ASURV: Astronomical SURVival Statistics

    NASA Astrophysics Data System (ADS)

    Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.

    2014-06-01

    ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.

  20. Spitball Scatterplots in Statistics

    ERIC Educational Resources Information Center

    Wagaman, John C.

    2012-01-01

    This paper describes an active learning idea that I have used in my applied statistics class as a first lesson in correlation and regression. Students propel spitballs from various standing distances from the target and use the recorded data to determine if the spitball accuracy is associated with standing distance and review the algebra of lines…

  1. Geopositional Statistical Methods

    NASA Technical Reports Server (NTRS)

    Ross, Kenton

    2006-01-01

    RMSE-based methods distort circular error estimates (up to 50% overestimation). The empirical approach is the only statistically unbiased estimator offered. The Ager modification to the Shultz approach is nearly unbiased, but cumbersome. All methods hover around 20% uncertainty (@ 95% confidence) for low geopositional bias error estimates. This requires careful consideration in the assessment of higher-accuracy products.

  2. Learning Statistical Concepts

    ERIC Educational Resources Information Center

    Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah

    2004-01-01

    In order to learn the concept of statistical techniques one needs to run real experiments that generate reliable data. In practice, the data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…

  3. Education Statistics Quarterly, 2003.

    ERIC Educational Resources Information Center

    Marenus, Barbara; Burns, Shelley; Fowler, William; Greene, Wilma; Knepper, Paula; Kolstad, Andrew; McMillen Seastrom, Marilyn; Scott, Leslie

    2003-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  4. Analogies for Understanding Statistics

    ERIC Educational Resources Information Center

    Hocquette, Jean-Francois

    2004-01-01

    This article describes a simple way to explain the limitations of statistics to scientists and students to avoid the publication of misleading conclusions. Biologists examine their results extremely critically and carefully choose the appropriate analytic methods depending on their scientific objectives. However, no such close attention is usually…

  5. Statistical Significance Testing.

    ERIC Educational Resources Information Center

    McLean, James E., Ed.; Kaufman, Alan S., Ed.

    1998-01-01

    The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…

  6. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The small-sample-size problem encountered in the analysis of space-flight data is examined. Because of the small amount of data available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on considerations needed to choose the most appropriate test for a given type of analysis.

  7. Thermodynamics and Statistical Mechanics of Macromolecular Systems

    NASA Astrophysics Data System (ADS)

    Bachmann, Michael

    2014-04-01

    Preface and outline; 1. Introduction; 2. Statistical mechanics: a modern review; 3. The complexity of minimalistic lattice models for protein folding; 4. Monte Carlo and chain growth methods for molecular simulations; 5. First insights to freezing and collapse of flexible polymers; 6. Crystallization of elastic polymers; 7. Structural phases of semiflexible polymers; 8. Generic tertiary folding properties of proteins in mesoscopic scales; 9. Protein folding channels and kinetics of two-state folding; 10. Inducing generic secondary structures by constraints; 11. Statistical analyses of aggregation processes; 12. Hierarchical nature of phase transitions; 13. Adsorption of polymers at solid substrates; 14. Hybrid protein-substrate interfaces; 15. Concluding remarks and outlook; Notes; References; Index.

  8. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed Central

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
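
    One of the simplest analytic errors discussed, ignoring the weights of a disproportionately stratified design, is easy to demonstrate. The simulation below uses an invented population, strata, and sampling fractions (not SESTAT data); the unweighted sample mean is badly biased relative to the population mean, while the design-weighted mean recovers it.

      # Effect of ignoring sampling weights in a disproportionately stratified sample.
      import numpy as np

      rng = np.random.default_rng(4)
      # population: stratum 0 is large with a low outcome mean, stratum 1 small with a high mean
      pop = np.concatenate([rng.normal(40, 5, 90000), rng.normal(70, 5, 10000)])
      strata = np.concatenate([np.zeros(90000, int), np.ones(10000, int)])

      # oversample the small stratum: equal sample sizes from very unequal strata
      idx0 = rng.choice(np.where(strata == 0)[0], 500, replace=False)
      idx1 = rng.choice(np.where(strata == 1)[0], 500, replace=False)
      sample = np.concatenate([idx0, idx1])
      weights = np.where(strata[sample] == 0, 90000 / 500, 10000 / 500)

      print("population mean:       ", pop.mean().round(2))           # about 43
      print("unweighted sample mean:", pop[sample].mean().round(2))   # about 55, biased
      print("weighted sample mean:  ", np.average(pop[sample], weights=weights).round(2))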

  9. Psychotherapist effects in meta-analyses: How accurate are treatment effects?

    PubMed

    Owen, Jesse; Drinane, Joanna M; Idigo, K Chinwe; Valentine, Jeffrey C

    2015-09-01

    Psychotherapists are known to vary in their effectiveness with their clients, in randomized clinical trials as well as naturally occurring treatment settings. The fact that therapists matter has 2 effects in psychotherapy studies. First, if therapists are not randomly assigned to modalities (which is rare) this may bias the estimation of the treatment effects, as the modalities may have therapists of differing skill. In addition, if the data are analyzed at the client level (which is virtually always the case) then the standard errors for the effect sizes will be biased due to a violation of the assumption of independence. Thus, the conclusions of many meta-analyses may not reflect true estimates of treatment differences. We reexamined 20 treatment effects selected from 17 meta-analyses. We focused on meta-analyses that found statistically significant differences between treatments for a variety of disorders by correcting the treatment effects according to the variability in outcomes known to be associated with psychotherapists. The results demonstrated that after adjusting the results based on most small estimates of therapist effects, ∼80% of the reported treatment effects would still be statistically significant. However, at larger estimates, only 20% of the treatment effects would still be statistically significant after controlling for therapist effects. Although some meta-analyses were consistent in their estimates for treatment differences, the degree of certainty in the results was considerably reduced after considering therapist effects. Practice implications for understanding treatment effects, namely, therapist effects, in meta-analyses and original studies are provided. PMID:26301423

  10. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1992-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  11. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1993-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  12. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, as well as normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
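
    The distinction between relative risk and odds ratio mentioned above can be illustrated from a single 2 x 2 table. The helper below uses invented counts; note that the two measures diverge noticeably when the outcome is common.

      # Relative risk and odds ratio from the same 2x2 table (invented counts).
      def risk_measures(exposed_event, exposed_no, unexposed_event, unexposed_no):
          risk_exposed = exposed_event / (exposed_event + exposed_no)
          risk_unexposed = unexposed_event / (unexposed_event + unexposed_no)
          relative_risk = risk_exposed / risk_unexposed                      # cohort studies, RCTs
          odds_ratio = (exposed_event * unexposed_no) / (exposed_no * unexposed_event)  # case-control
          return relative_risk, odds_ratio

      rr, or_ = risk_measures(30, 70, 10, 90)
      print(f"relative risk = {rr:.2f}, odds ratio = {or_:.2f}")   # 3.00 vs 3.86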

  13. SHARE: Statistical hadronization with resonances

    NASA Astrophysics Data System (ADS)

    Torrieri, G.; Steinke, S.; Broniowski, W.; Florkowski, W.; Letessier, J.; Rafelski, J.

    2005-05-01

    interaction feed-down corrections, the observed hadron abundances are obtained. SHARE incorporates diverse physical approaches, with a flexibility of choice of the details of the statistical hadronization model, including the selection of a chemical (non-)equilibrium condition. SHARE also offers evaluation of the extensive properties of the source of particles, such as energy, entropy, baryon number, strangeness, as well as the determination of the best intensive input parameters fitting a set of experimental yields. This allows exploration of a proposed physical hypothesis about hadron production mechanisms and the determination of the properties of their source. Method of solving the problem: Distributions at freeze-out of both the stable particles and the hadronic resonances are set according to a statistical prescription, technically calculated via a series of Bessel functions, using CERN library programs. We also have the option of including finite particle widths of the resonances. While this is computationally expensive, it is necessary to fully implement the essence of the strong interaction dynamics within the statistical hadronization picture. In fact, including finite width has a considerable effect when modeling directly detectable short-lived resonances (Λ(1520), K, etc.), and is noticeable in fits to experimentally measured yields of stable particles. After production, all hadronic resonances decay. Resonance decays are accomplished by addition of the parent abundances to the daughter, normalized by the branching ratio. Weak interaction decays receive a special treatment, where we introduce daughter particle acceptance factors for both strongly interacting decay products. An interface for fitting the statistical model parameters to experimental particle ratios with the help of MINUIT [1] is provided. The χ² function is defined in the standard way: for an investigated quantity f with experimental error Δf, χ² = Σ (f_theory - f_experiment)² / (Δf)² (note that systematic and statistical
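
    As a plain illustration of the fit criterion just described, the sketch below computes such a χ² for a few yields; the yields and errors are invented, and SHARE itself drives the minimisation with MINUIT rather than with this toy function.

      # Standard chi-square between model and measured yields (illustrative numbers).
      def chi_square(model_yields, measured_yields, errors):
          return sum(((m - d) / e) ** 2
                     for m, d, e in zip(model_yields, measured_yields, errors))

      measured = [1.00, 0.15, 0.021]      # made-up particle ratios
      errors   = [0.05, 0.02, 0.004]
      model    = [0.96, 0.17, 0.024]
      print(chi_square(model, measured, errors))   # the quantity a fitter would minimise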

  14. Combining Statistical Tools and Ecological Assessments in the Study of Biodeterioration Patterns of Stone Temples in Angkor (Cambodia).

    PubMed

    Caneva, G; Bartoli, F; Savo, V; Futagami, Y; Strona, G

    2016-01-01

    Biodeterioration is a major problem for the conservation of cultural heritage materials. We provide a new and original approach to analyzing changes in patterns of colonization (Biodeterioration patterns, BPs) by biological agents responsible for the deterioration of outdoor stone materials. Here we analyzed BPs of four Khmer temples in Angkor (Cambodia) exposed to variable environmental conditions, using qualitative ecological assessments and statistical approaches. The statistical analyses supported the findings obtained with the qualitative approach. Both approaches provided additional information not otherwise available using one single method. Our results indicate that studies on biodeterioration can benefit from integrating diverse methods so that conservation efforts might become more precise and effective. PMID:27597658

  15. Combining Statistical Tools and Ecological Assessments in the Study of Biodeterioration Patterns of Stone Temples in Angkor (Cambodia)

    NASA Astrophysics Data System (ADS)

    Caneva, G.; Bartoli, F.; Savo, V.; Futagami, Y.; Strona, G.

    2016-09-01

    Biodeterioration is a major problem for the conservation of cultural heritage materials. We provide a new and original approach to analyzing changes in patterns of colonization (Biodeterioration patterns, BPs) by biological agents responsible for the deterioration of outdoor stone materials. Here we analyzed BPs of four Khmer temples in Angkor (Cambodia) exposed to variable environmental conditions, using qualitative ecological assessments and statistical approaches. The statistical analyses supported the findings obtained with the qualitative approach. Both approaches provided additional information not otherwise available using one single method. Our results indicate that studies on biodeterioration can benefit from integrating diverse methods so that conservation efforts might become more precise and effective.

  16. Combining Statistical Tools and Ecological Assessments in the Study of Biodeterioration Patterns of Stone Temples in Angkor (Cambodia)

    PubMed Central

    Caneva, G.; Bartoli, F.; Savo, V.; Futagami, Y.; Strona, G.

    2016-01-01

    Biodeterioration is a major problem for the conservation of cultural heritage materials. We provide a new and original approach to analyzing changes in patterns of colonization (Biodeterioration patterns, BPs) by biological agents responsible for the deterioration of outdoor stone materials. Here we analyzed BPs of four Khmer temples in Angkor (Cambodia) exposed to variable environmental conditions, using qualitative ecological assessments and statistical approaches. The statistical analyses supported the findings obtained with the qualitative approach. Both approaches provided additional information not otherwise available using one single method. Our results indicate that studies on biodeterioration can benefit from integrating diverse methods so that conservation efforts might become more precise and effective. PMID:27597658

  17. "Just Another Statistic"

    PubMed

    Machtay; Glatstein

    1998-01-01

    On returning from a medical meeting, we learned that sadly a patient, "Mr. B.," had passed away. His death was a completely unexpected surprise. He had been doing well nine months after a course of intensive radiotherapy for a locally advanced head and neck cancer; in his most recent follow-up notes, he was described as a "complete remission." Nonetheless, he apparently died peacefully in his sleep from a cardiac arrest one night and was found the next day by a concerned neighbor. In our absence, after Mr. B. expired, his death certificate was filled out by a physician who didn't know him in detail, but did know why he recently was treated in our department. The cause of death was listed as head and neck cancer. It wasn't long after his death before we began to receive those notorious "requests for additional information," letters from the statistical office of a well-known cooperative group. Mr. B., as it turns out, was on a clinical trial, and it was "vital" to know further details of the circumstances of his passing. Perhaps this very large cancer had been controlled and Mr. B. succumbed to old age (helped along by the tobacco industry). On the other hand, maybe the residual "fibrosis" in his neck was actually packed with active tumor and his left carotid artery was finally 100% pinched off, or maybe he suffered a massive pulmonary embolism from cancer-related hypercoagulability. The forms and requests were completed with a succinct "cause of death uncertain," adding, "please have the Study Chairs call to discuss this difficult case." Often clinical reports of outcomes utilize and emphasize the endpoint "disease specific survival" (DSS). Like overall survival (OS), the DSS can be calculated by actuarial methods, with patients who have incomplete follow-up "censored" at the time of last follow-up pending further information. In the DSS, however, deaths unrelated to the index cancer of interest are censored at the time of death; thus, a death from intercurrent

  18. Additional Types of Neuropathy

    MedlinePlus

    Charcot's Joint, also called neuropathic arthropathy, ... can stop bone destruction and aid healing. Cranial neuropathy affects the 12 pairs of nerves ...

  19. Food Additives and Hyperkinesis

    ERIC Educational Resources Information Center

    Wender, Ester H.

    1977-01-01

    The hypothesis that food additives are causally associated with hyperkinesis and learning disabilities in children is reviewed, and available data are summarized. Available from: American Medical Association 535 North Dearborn Street Chicago, Illinois 60610. (JG)

  20. Smog control fuel additives

    SciTech Connect

    Lundby, W.

    1993-06-29

    A method is described of controlling, reducing, or eliminating ozone and related smog resulting from photochemical reactions between ozone and automotive or industrial gases comprising the addition of iodine or compounds of iodine to hydrocarbon-base fuels prior to or during combustion in an amount of about 1 part iodine per 240 to 10,000,000 parts fuel, by weight, to be accomplished by: (a) the addition of these inhibitors during or after the refining or manufacturing process of liquid fuels; (b) the production of these inhibitors for addition into fuel tanks, such as automotive or industrial tanks; or (c) the addition of these inhibitors into combustion chambers of equipment utilizing solid fuels for the purpose of reducing ozone.

  1. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in

  2. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density
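
    The central claim, that a product of several independent, arbitrarily distributed positive factors is approximately lognormal because the CLT applies to the sum of their logarithms, is easy to check numerically. In the sketch below the seven factor distributions are arbitrary stand-ins, not proposed values for the Drake factors.

      # Numerical check: the product of arbitrary positive factors is roughly lognormal.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      samples = 100_000
      factors = [
          rng.uniform(0.1, 10.0, samples),
          rng.gamma(2.0, 1.5, samples),
          rng.lognormal(0.0, 0.4, samples),
          rng.uniform(0.5, 2.0, samples),
          rng.gamma(5.0, 0.2, samples),
          rng.uniform(1.0, 50.0, samples),
          rng.gamma(1.5, 2.0, samples),
      ]
      N = np.prod(factors, axis=0)         # analogue of the number of civilizations

      # if N is lognormal, log(N) should be close to normal (near-zero skew)
      print("skew of N:     ", stats.skew(N).round(2))
      print("skew of log(N):", stats.skew(np.log(N)).round(3))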

  3. Modern Statistical Graphs that Provide Insight in Research Results

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Modern statistical graphics offer insight in assessing the results of many common statistical analyses. These ideas, however, are rarely employed in agronomic research articles. This work presents several commonly used graphs, offers one or more alternatives for each, and provides the reasons for pr...

  4. A Taste of Asia with Statistics and Technology

    ERIC Educational Resources Information Center

    Reid, Josh; Carmichael, Colin

    2015-01-01

    Josh Reid and Colin Carmichael describe how some Year 6 children have developed their understanding of mathematics by studying Asian countries. The statistical analyses undertaken by these children appear to have strengthened their understanding of statistical concepts and at the same time provided them with tools for understanding complex…


  5. Indian Ocean analyses

    NASA Technical Reports Server (NTRS)

    Meyers, Gary

    1992-01-01

    The background and goals of Indian Ocean thermal sampling are discussed from the perspective of a national project which has research goals relevant to variation of climate in Australia. The critical areas of SST variation are identified. The first goal of thermal sampling at this stage is to develop a climatology of thermal structure in the areas and a description of the annual variation of major currents. The sampling strategy is reviewed. Dense XBT sampling is required to achieve accurate, monthly maps of isotherm-depth because of the high level of noise in the measurements caused by aliasing of small scale variation. In the Indian Ocean ship routes dictate where adequate sampling can be achieved. An efficient sampling rate on available routes is determined based on objective analysis. The statistical structure required for objective analysis is described and compared at 95 locations in the tropical Pacific and 107 in the tropical Indian Oceans. XBT data management and quality control methods at CSIRO are reviewed. Results on the mean and annual variation of temperature and baroclinic structure in the South Equatorial Current and Pacific/Indian Ocean Throughflow are presented for the region between northwest Australia and Java-Timor. The mean relative geostrophic transport (0/400 db) of Throughflow is approximately 5 x 10^6 m^3/sec. A nearly equal volume transport is associated with the reference velocity at 400 db. The Throughflow feeds the South Equatorial Current, which has maximum westward flow in August/September, at the end of the southeasterly Monsoon season. A strong semiannual oscillation in the South Java Current is documented. The results are in good agreement with the Semtner and Chervin (1988) ocean general circulation model. The talk concludes with comments on data inadequacies (insufficient coverage, timeliness) particular to the Indian Ocean and suggestions on the future role that can be played by Data Centers, particularly with regard to quality

  6. Statistical region merging.

    PubMed

    Nock, Richard; Nielsen, Frank

    2004-11-01

    This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short readily available C-code, display the quality of the segmentations obtained.
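
    The abstract does not reproduce the merging predicate, so the sketch below is only an illustrative variant written for this compilation: 4-neighbour pixel pairs are visited in order of increasing intensity difference and their regions are merged whenever the difference of region means lies within a statistical bound that tightens as regions grow. The bound used here is a simplification, not the published one.

```python
import numpy as np

def region_merge(img, q=32.0):
    """Simplified greedy region merging: visit 4-neighbour pixel pairs in order of
    increasing intensity difference and merge regions when their means differ by
    less than a size-dependent statistical bound.  Illustrative variant only."""
    h, w = img.shape
    n = h * w
    flat = img.astype(float).ravel()
    delta = 1.0 / (6.0 * n * n)
    parent, size, total = np.arange(n), np.ones(n), flat.copy()

    def find(x):                                   # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def bound(sz, g=256.0):                        # allowed deviation for a region of size sz
        return g * np.sqrt(np.log(2.0 / delta) / (2.0 * q * sz))

    pairs = []                                     # 4-neighbour pairs sorted by intensity difference
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                pairs.append((abs(flat[i] - flat[i + 1]), i, i + 1))
            if y + 1 < h:
                pairs.append((abs(flat[i] - flat[i + w]), i, i + w))
    pairs.sort(key=lambda p: p[0])

    for _, a, b in pairs:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if abs(total[ra] / size[ra] - total[rb] / size[rb]) <= \
           np.hypot(bound(size[ra]), bound(size[rb])):
            parent[rb] = ra
            total[ra] += total[rb]
            size[ra] += size[rb]

    return np.array([find(i) for i in range(n)]).reshape(h, w)

# toy image: two flat patches plus noise -> expect essentially two regions
rng = np.random.default_rng(1)
img = np.hstack([np.full((16, 8), 60.0), np.full((16, 8), 190.0)]) + rng.normal(0, 5, (16, 16))
print(np.unique(region_merge(img)).size, "regions")
```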

  7. Modeling cosmic void statistics

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.

    2016-10-01

    Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density- and velocity-profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM n-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.

  8. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
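
    A toy numerical illustration of the baseline being described (my construction, not the authors' analytic framework, and it ignores the refinement about the period during which an event must remain absent or occur): the sensitivity of a random predictor that raises alarms independently of the events equals its alarm fraction, which is the floor any genuine forecaster must beat.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 100_000            # discretized time steps
event_rate = 1e-3      # probability of an extreme event per step
alarm_rate = 0.05      # fraction of time the predictor is in the alarm state

events = rng.random(T) < event_rate
alarms = rng.random(T) < alarm_rate    # stand-in forecaster (here deliberately random)

# sensitivity: fraction of events that fall inside an alarm
sensitivity = alarms[events].mean()
# specificity: fraction of event-free steps with no alarm raised
specificity = (~alarms[~events]).mean()

# a random predictor with the same alarm fraction has expected sensitivity equal
# to that fraction, which is the baseline a real forecaster must beat
print(f"sensitivity = {sensitivity:.3f} (random baseline {alarm_rate:.3f})")
print(f"specificity = {specificity:.3f} (random baseline {1 - alarm_rate:.3f})")
```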

  9. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... identify and address relevant markets and issues, and provide additional information as requested by the...). (b) For major transactions, applicants shall submit “full system” impact analyses (incorporating any... (including inter- and intramodal competition, product competition, and geographic competition) and...

  10. Which statistics should tropical biologists learn?

    PubMed

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.
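
    As a pointer toward the "reliable and friendly freeware" the authors advocate, most of the tests listed above are one-line calls in open-source packages. A minimal sketch with made-up numbers (SciPy shown here; R and other free packages expose the same tests):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a, b, c = rng.normal(10, 2, 30), rng.normal(11, 2, 30), rng.normal(12, 2, 30)

print(stats.f_oneway(a, b, c))            # one-way ANOVA
print(stats.ttest_ind(a, b))              # Student's t test
print(stats.mannwhitneyu(a, b))           # Mann-Whitney U test
print(stats.kruskal(a, b, c))             # Kruskal-Wallis test
print(stats.pearsonr(a, b))               # Pearson's correlation
print(stats.spearmanr(a, b))              # Spearman's rank correlation
table = np.array([[12, 30], [45, 13]])    # made-up 2x2 contingency table
print(stats.chi2_contingency(table))      # Chi-square test of independence
```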

  11. Journey Through Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Yang, C. N.

    2013-05-01

    My first involvement with statistical mechanics and the many body problem was when I was a student at The National Southwest Associated University in Kunming during the war. At that time Professor Wang Zhu-Xi had just come back from Cambridge, England, where he was a student of Fowler, and his thesis was on phase transitions, a hot topic at that time, and still a very hot topic today...

  12. Statistical Methods in Cosmology

    NASA Astrophysics Data System (ADS)

    Verde, L.

    2010-03-01

    The advent of large data sets in cosmology has meant that in the past 10 or 20 years our knowledge and understanding of the Universe has changed not only quantitatively but also, and most importantly, qualitatively. Cosmologists rely on data where a host of useful information is enclosed, but is encoded in a non-trivial way. The challenges in extracting this information must be overcome to make the most of a large experimental effort. Even after having converged to a standard cosmological model (the LCDM model) we should keep in mind that this model is described by 10 or more physical parameters and if we want to study deviations from it, the number of parameters is even larger. Dealing with such a high dimensional parameter space and finding parameter constraints is a challenge in itself. Cosmologists want to be able to compare and combine different data sets both for testing for possible disagreements (which could indicate new physics) and for improving parameter determinations. Finally, cosmologists in many cases want to find out, before actually doing the experiment, how much one would be able to learn from it. For all these reasons, sophisticated statistical techniques are being employed in cosmology, and it has become crucial to know some statistical background to understand recent literature in the field. I will introduce some statistical tools that any cosmologist should know about in order to be able to understand recently published results from the analysis of cosmological data sets. I will not present a complete and rigorous introduction to statistics as there are several good books which are reported in the references. The reader should refer to those.

  13. Statistics of entrance times

    NASA Astrophysics Data System (ADS)

    Talkner, Peter

    2003-07-01

    The statistical properties of the transitions of a discrete Markov process are investigated in terms of entrance times. A simple formula for their density is given and used to measure the synchronization of a process with a periodic driving force. For the McNamara-Wiesenfeld model of stochastic resonance we find parameter regions in which the transition frequency of the process is locked with the frequency of the external driving.

  14. 1979 DOE statistical symposium

    SciTech Connect

    Gardiner, D.A.; Truett T.

    1980-09-01

    The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.

  15. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell

    2014-01-01

    Additive manufacturing is a rapid prototyping technology that allows parts to be built in a series of thin layers from plastic, ceramics, and metallics. Metallic additive manufacturing is an emerging form of rapid prototyping that allows complex structures to be built using various metallic powders. Significant time and cost savings have also been observed using metallic additive manufacturing compared with traditional techniques. Development of the metallic additive manufacturing technology has advanced significantly over the last decade, although many of the techniques to inspect parts made from these processes have not advanced significantly or have limitations. Several external geometry inspection techniques exist such as Coordinate Measurement Machines (CMM), Laser Scanners, Structured Light Scanning Systems, or even traditional calipers and gages. All of the aforementioned techniques are limited to external geometry and contours or must use a contact probe to inspect limited internal dimensions. This presentation will document the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques.

  16. Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior

    ERIC Educational Resources Information Center

    Borrero, Carrie S. W.; Borrero, John C.

    2008-01-01

    We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential…

  17. What can we do about exploratory analyses in clinical trials?

    PubMed

    Moyé, Lem

    2015-11-01

    The research community has alternately embraced and then repudiated exploratory analyses since the inception of clinical trials in the middle of the twentieth century. After a series of important but ultimately unreproducible findings, these non-prospectively declared evaluations were relegated to hypothesis generating. Since the majority of evaluations conducted in clinical trials with their rich data sets are exploratory, the absence of their persuasive power adds to the inefficiency of clinical trial analyses in an atmosphere of fiscal frugality. However, the principal argument against exploratory analyses is not based in statistical theory, but in pragmatism and observation. The absence of any theoretical treatment of exploratory analyses postpones the day when their statistical weaknesses might be repaired. Here, we introduce an examination of the characteristics of exploratory analyses from a probabilistic and statistical framework. Setting the obvious logistical concerns aside (i.e., the absence of planning produces poor precision), exploratory analyses do not appear to suffer from estimation theory weaknesses. The problem appears to be a difficulty in what is actually reported as the p-value. The use of Bayes' Theorem provides p-values that are more in line with confirmatory analyses. This development may inaugurate a body of work that would lead to the readmission of exploratory analyses to a position of persuasive power in clinical trials.

  18. Data-Intensive Statistical Computations in Astronomy

    NASA Astrophysics Data System (ADS)

    Szalay, Alex

    2010-01-01

    The emerging large datasets are posing major challenges for their subsequent statistical analyses. One needs to reinvent optimal statistical algorithms, where the cost of computing is taken into account. Moving large amounts of data is becoming increasingly untenable, thus our computations must be performed close to the data. Existing computer architectures are CPU-heavy, while the first passes of most data analyses require an extreme I/O bandwidth. Novel computational algorithms, optimized for extreme datasets, and new, data-intensive architectures must be invented. The outputs of large numerical simulations increasingly resemble the "observable" universe, with data volumes approaching, if not exceeding, those of observational data. Persistent "laboratories" of numerical experiments will soon be publicly available, and will change the way we approach the comparisons of observational data to first-principles simulations.

  19. Quantum U-statistics

    SciTech Connect

    Guta, Madalin; Butucea, Cristina

    2010-10-15

    The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (C^d)^(⊗r) with r < n, the associated U-statistic converges in moments to a linear combination of Hermite polynomials in canonical variables of a canonical commutation relation algebra defined through the quantum central limit theorem. In the special cases of nondegenerate kernels and kernels of order 2, it is shown that the convergence holds in the stronger distribution sense. Two types of applications in quantum statistics are described: testing beyond the two simple hypotheses scenario and quantum metrology with interacting Hamiltonians.

  20. Statistical Inference at Work: Statistical Process Control as an Example

    ERIC Educational Resources Information Center

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
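
    For readers who know hypothesis testing but not SPC, the inferential object being compared is typically a control chart. The following is a minimal X-bar (Shewhart) chart sketch with simulated data; the data, the 3-sigma convention shown, and the simplified sigma estimate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# 50 subgroups of 5 measurements from an in-control process (illustrative data)
subgroups = rng.normal(loc=100.0, scale=2.0, size=(50, 5))
xbar = subgroups.mean(axis=1)

center = xbar.mean()
# simplified sigma estimate; textbook charts apply a bias-correction factor
sigma_xbar = subgroups.std(axis=1, ddof=1).mean() / np.sqrt(subgroups.shape[1])
ucl, lcl = center + 3 * sigma_xbar, center - 3 * sigma_xbar   # 3-sigma control limits

# a new subgroup is flagged when its mean falls outside the limits
new_subgroup = rng.normal(loc=104.0, scale=2.0, size=5)       # simulated process shift
print(f"limits: [{lcl:.2f}, {ucl:.2f}], new mean: {new_subgroup.mean():.2f}",
      "-> out of control" if not lcl <= new_subgroup.mean() <= ucl else "-> in control")
```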

  1. Phenylethynyl Containing Reactive Additives

    NASA Technical Reports Server (NTRS)

    Connell, John W. (Inventor); Smith, Joseph G., Jr. (Inventor); Hergenrother, Paul M. (Inventor)

    2002-01-01

    Phenylethynyl containing reactive additives were prepared from an aromatic diamine containing phenylethynyl groups and various ratios of phthalic anhydride and 4-phenylethynylphthalic anhydride in glacial acetic acid to form the imide in one step or in N-methyl-2-pyrrolidinone to form the amide acid intermediate. The reactive additives were mixed in various amounts (10% to 90%) with oligomers containing either terminal or pendent phenylethynyl groups (or both) to reduce the melt viscosity and thereby enhance processability. Upon thermal cure, the additives react and become chemically incorporated into the matrix and effect an increase in crosslink density relative to that of the host resin. This resultant increase in crosslink density has advantageous consequences on the cured resin properties such as higher glass transition temperature and higher modulus as compared to that of the host resin.

  2. ISSUES IN THE STATISTICAL ANALYSIS OF SMALL-AREA HEALTH DATA. (R825173)

    EPA Science Inventory

    The availability of geographically indexed health and population data, with advances in computing, geographical information systems and statistical methodology, have opened the way for serious exploration of small area health statistics based on routine data. Such analyses may be...

  3. Oil spills, 1976-85: statistical report

    SciTech Connect

    Cotton, U.

    1986-01-01

    Oil spillage resulting from drilling and production activities on the Outer Continental Shelf (OCS) is a matter of wide public concern. This report provides a complete and accurate statistical account of all the oil spills reported relative to oil and gas operations on the OCS. The report is prepared especially with those in mind who are concerned with accident prevention utilizing the various kinds of analyses and preaccident investigation techniques. Data are presented in tables and figures showing relationships between number of spills, volume of spills, accident years, cause of spills, location of spills, type of operations, and amount of production. The statistics in the tables and graphs are derived primarily from the OCS Events File. They summarize the spills that MMS has recorded. The statistics are not an evaluation of the systems that allowed the accidents to occur.

  4. Statistics in clinical research: Important considerations

    PubMed Central

    Barkan, Howard

    2015-01-01

    Statistical analysis is one of the foundations of evidence-based clinical practice, a key in conducting new clinical research and in evaluating and applying prior research. In this paper, we review the choice of statistical procedures, analyses of the associations among variables and techniques used when the clinical processes being examined are still in process. We discuss methods for building predictive models in clinical situations, and ways to assess the stability of these models and other quantitative conclusions. Techniques for comparing independent events are distinguished from those used with events in a causal chain or otherwise linked. Attention then turns to study design, to the determination of the sample size needed to make a given comparison, and to statistically negative studies. PMID:25566715

  5. Multifunctional fuel additives

    SciTech Connect

    Baillargeon, D.J.; Cardis, A.B.; Heck, D.B.

    1991-03-26

    This paper discusses a composition comprising a major amount of a liquid hydrocarbyl fuel and a minor low-temperature flow properties improving amount of an additive product of the reaction of a suitable diol and product of a benzophenone tetracarboxylic dianhydride and a long-chain hydrocarbyl aminoalcohol.

  6. Biobased lubricant additives

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...

  7. Statistical Treatment of Looking-Time Data

    PubMed Central

    2016-01-01

    Looking times (LTs) are frequently measured in empirical research on infant cognition. We analyzed the statistical distribution of LTs across participants to develop recommendations for their treatment in infancy research. Our analyses focused on a common within-subject experimental design, in which longer looking to novel or unexpected stimuli is predicted. We analyzed data from 2 sources: an in-house set of LTs that included data from individual participants (47 experiments, 1,584 observations), and a representative set of published articles reporting group-level LT statistics (149 experiments from 33 articles). We established that LTs are log-normally distributed across participants, and therefore, should always be log-transformed before parametric statistical analyses. We estimated the typical size of significant effects in LT studies, which allowed us to make recommendations about setting sample sizes. We show how our estimate of the distribution of effect sizes of LT studies can be used to design experiments to be analyzed by Bayesian statistics, where the experimenter is required to determine in advance the predicted effect size rather than the sample size. We demonstrate the robustness of this method in both sets of LT experiments. PMID:26845505
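
    The practical upshot of the log-normality result can be sketched in a few lines (hypothetical looking times, not the authors' data; the effect size below is invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# hypothetical within-subject looking times (seconds) for 24 infants
lt_familiar = rng.lognormal(mean=1.8, sigma=0.5, size=24)
lt_novel = lt_familiar * rng.lognormal(mean=0.25, sigma=0.3, size=24)  # longer looks to novel

# log-transform before the parametric (paired) test, as the distributional result recommends
t, p = stats.ttest_rel(np.log(lt_novel), np.log(lt_familiar))
print(f"paired t-test on log looking times: t = {t:.2f}, p = {p:.4f}")
```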

  8. Axiomatic nonextensive statistics at NICA energies

    NASA Astrophysics Data System (ADS)

    Nasser Tawfik, Abdel

    2016-08-01

    We discuss the possibility of implementing axiomatic nonextensive statistics, where it is conjectured that the phase-space volume determines the (non)extensive entropy, on the particle production at NICA energies. Both Boltzmann-Gibbs and Tsallis statistics are very special cases of this generic (non)extensivity. We conclude that the lattice thermodynamics is ab initio extensive and additive, and thus nonextensive approaches, including Tsallis statistics, categorically do not match it, while particle production, for instance the particle ratios at various center-of-mass energies, is likely a nonextensive process but certainly not of Tsallis type. The resulting freezeout parameters, the temperature and the chemical potentials, are approximately compatible with the ones deduced from Boltzmann-Gibbs statistics.
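
    For reference, and not stated in the abstract itself, the standard Tsallis entropy and its pseudo-additivity rule (Boltzmann's constant set to one) make the contrast explicit:

    \[
    S_q \;=\; \frac{1-\sum_i p_i^{\,q}}{q-1}
    \;\xrightarrow{\;q\to 1\;}\; -\sum_i p_i \ln p_i ,
    \qquad
    S_q(A{+}B) \;=\; S_q(A)+S_q(B)+(1-q)\,S_q(A)\,S_q(B).
    \]

    The first limit recovers the Boltzmann-Gibbs entropy; the pseudo-additivity for independent subsystems A and B in the second relation is the nonextensivity that an ab initio extensive and additive lattice thermodynamics excludes.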

  9. Statistical design for microwave systems

    NASA Technical Reports Server (NTRS)

    Cooke, Roland; Purviance, John

    1991-01-01

    This paper presents an introduction to statistical system design. Basic ideas needed to understand statistical design and a method for implementing statistical design are presented. The nonlinear characteristics of the system amplifiers and mixers are accounted for in the given examples. The specification of group delay, signal-to-noise ratio and output power are considered in these statistical designs.

  10. Experimental Mathematics and Computational Statistics

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  11. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, R. J.; Feiveson, A. H.

    2015-01-01

    Back by popular demand, the JSC Biostatistics Lab is offering an opportunity for informal conversation about challenges you may have encountered with issues of experimental design, analysis, data visualization or related topics. Get answers to common questions about sample size, repeated measures, violation of distributional assumptions, missing data, multiple testing, time-to-event data, when to trust the results of your analyses (reproducibility issues) and more.

  12. Who Needs Statistics? | Poster

    Cancer.gov

    You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.

  13. Statistical physics and ecology

    NASA Astrophysics Data System (ADS)

    Volkov, Igor

    This work addresses the applications of the methods of statistical physics to problems in population ecology. A theoretical framework based on stochastic Markov processes for the unified neutral theory of biodiversity is presented and an analytical solution for the distribution of the relative species abundance distribution both in the large meta-community and in the small local community is obtained. It is shown that the framework of the current neutral theory in ecology can be easily generalized to incorporate symmetric density dependence. An analytically tractable model is studied that provides an accurate description of beta-diversity and exhibits novel scaling behavior that leads to links between ecological measures such as relative species abundance and the species area relationship. We develop a simple framework that incorporates the Janzen-Connell, dispersal and immigration effects and leads to a description of the distribution of relative species abundance, the equilibrium species richness, beta-diversity and the species area relationship, in good accord with data. Also it is shown that an ecosystem can be mapped into an unconventional statistical ensemble and is quite generally tuned in the vicinity of a phase transition where bio-diversity and the use of resources are optimized. We also perform a detailed study of the unconventional statistical ensemble, in which, unlike in physics, the total number of particles and the energy are not fixed but bounded. We show that the temperature and the chemical potential play a dual role: they determine the average energy and the population of the levels in the system and at the same time they act as an imbalance between the energy and population ceilings and the corresponding average values. Different types of statistics (Boltzmann, Bose-Einstein, Fermi-Dirac and one corresponding to the description of a simple ecosystem) are considered. In all cases, we show that the systems may undergo a first or a second order

  14. International petroleum statistics report

    SciTech Connect

    1995-10-01

    The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.

  15. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  16. Teaching Statistics in Biology: Using Inquiry-based Learning to Strengthen Understanding of Statistical Analysis in Biology Laboratory Courses

    PubMed Central

    2008-01-01

    There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9%, improvement p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study. PMID:18765754

  17. Local analyses of Planck maps with Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Novaes, C. P.; Bernui, A.; Marques, G. A.; Ferreira, I. S.

    2016-09-01

    Minkowski functionals (MF) are excellent tools for investigating the statistical properties of cosmic microwave background (CMB) maps. Among their notable advantages is the possibility of using them efficiently on patches of the CMB sphere, which allows studies of masked skies, including analyses of small sky regions. Possible deviations from Gaussianity are then investigated by comparison with MF obtained from a set of Gaussian isotropic simulated CMB maps to which the same cut-sky masks are applied. These analyses are sensitive enough to detect contaminations of small intensity such as primary and secondary CMB anisotropies. Our methodology uses the MF, widely employed to study non-Gaussianities in CMB data, and asserts Gaussian deviations only when all of them point to an exceptional χ2 value, at more than the 2.2σ confidence level, in a given sky patch. Following this rigorous procedure, we find 13 regions in the foreground-cleaned Planck maps that evince such high levels of non-Gaussian deviations. According to our results, these non-Gaussian contributions show signatures that can be associated with the presence of hot or cold spots in such regions. Moreover, some of these non-Gaussian deviation signals suggest the presence of foreground residuals in those regions located near the Galactic plane. Additionally, we confirm that most of the regions revealed in our analyses, but not all, have recently been reported in studies done by the Planck collaboration. Furthermore, we also investigate whether these non-Gaussian deviations could be sourced by systematics, such as inhomogeneous noise and beam effects in the released Planck data, or perhaps by residual Galactic foregrounds.
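
    The comparison logic, stripped of all Planck-specific details, can be illustrated with the first two Minkowski functionals of a toy two-dimensional map; everything below (the smoothed-noise stand-in map, the thresholds, and the crude boundary-length estimator) is an assumption of this sketch and not part of the published pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def minkowski_v0_v1(field, thresholds):
    """First two Minkowski functionals of the excursion sets of a 2-D map:
    v0 = area fraction above the threshold, v1 = crude lattice estimate of
    boundary length per pixel.  The third functional (Euler characteristic)
    is omitted for brevity."""
    v0, v1 = [], []
    for nu in thresholds:
        mask = field >= nu
        v0.append(mask.mean())
        cuts = np.count_nonzero(mask[:, 1:] != mask[:, :-1]) \
             + np.count_nonzero(mask[1:, :] != mask[:-1, :])
        v1.append(cuts / mask.size)
    return np.array(v0), np.array(v1)

# stand-in for a Gaussian isotropic map: smoothed white noise, normalized to unit variance
rng = np.random.default_rng(5)
kern = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
gmap = fftconvolve(rng.normal(size=(256, 256)), np.outer(kern, kern), mode="same")
gmap = (gmap - gmap.mean()) / gmap.std()

v0, v1 = minkowski_v0_v1(gmap, np.linspace(-3, 3, 13))
print(np.round(v0, 3))  # for a Gaussian field v0(nu) should follow 0.5*erfc(nu/sqrt(2))
```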

  18. 14 CFR 437.77 - Additional safety requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DEPARTMENT OF TRANSPORTATION LICENSING EXPERIMENTAL PERMITS Safety Requirements § 437.77 Additional safety... solid propellants. The FAA may also require the permittee to conduct additional analyses of the cause...

  19. Vinyl capped addition polyimides

    NASA Technical Reports Server (NTRS)

    Vannucci, Raymond D. (Inventor); Malarik, Diane C. (Inventor); Delvigs, Peter (Inventor)

    1991-01-01

    Polyimide resins (PMR) are generally useful where high strength and temperature capabilities are required (at temperatures up to about 700 F). Polyimide resins are particularly useful in applications such as jet engine compressor components, for example, blades, vanes, air seals, air splitters, and engine casing parts. Aromatic vinyl capped addition polyimides are obtained by reacting a diamine, an ester of tetracarboxylic acid, and an aromatic vinyl compound. Low void materials with improved oxidative stability when exposed to 700 F air may be fabricated as fiber reinforced high molecular weight capped polyimide composites. The aromatic vinyl capped polyimides are provided with a more aromatic nature and are more thermally stable than highly aliphatic, norbornenyl-type end-capped polyimides employed in PMR resins. The substitution of aromatic vinyl end-caps for norbornenyl end-caps in addition polyimides results in polymers with improved oxidative stability.

  20. Tackifier for addition polyimides

    NASA Technical Reports Server (NTRS)

    Butler, J. M.; St.clair, T. L.

    1980-01-01

    A modification to the addition polyimide, LaRC-160, was prepared to improve tack and drape and increase prepreg out-time. The essentially solventless, high viscosity laminating resin is synthesized from low cost liquid monomers. The modified version takes advantage of a reactive, liquid plasticizer which is used in place of solvent and helps solve a major problem of maintaining good prepreg tack and drape, or the ability of the prepreg to adhere to adjacent plies and conform to a desired shape during the lay-up process. This alternate solventless approach allows both longer life of the polymer prepreg and the processing of low void laminates. This approach appears to be applicable to all addition polyimide systems.

  1. Statistical Mechanics of Neural Networks

    NASA Astrophysics Data System (ADS)

    Rau, Albrecht

    1992-01-01

    Available from UMI in association with The British Library. Requires signed TDF. In this thesis we study neural networks using tools from the statistical mechanics of systems with quenched disorder. We apply these tools to two structurally different types of networks, feed-forward and feedback networks, whose properties we first review. After reviewing the use of feed-forward networks to infer unknown rules from sets of examples, we demonstrate how practical considerations can be incorporated into the analysis and how, as a consequence, existing learning theories have to be modified. To do so, we analyse the learning of rules which cannot be learnt perfectly due to constraints on the networks used. We present and analyse a model of multi-class classification and mention how it can be used. Finally we give an analytical treatment of a "learning by query" algorithm, for which the rule is extracted from queries which are not random but selected to increase the information gain. In this thesis feedback networks are used as associative memories. Our study centers on an analysis of specific features of the basins of attraction and the structure of weight space of optimized neural networks. We investigate the pattern selectivity of optimized networks, i.e. their ability to differentiate similar but distinct patterns, and show how the basins of attraction may be enlarged using external stimulus fields. Using a new method of analysis we study the weight space organization of optimized neural networks and show how the insights gained can be used to classify different groups of networks.

  2. Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses

    USGS Publications Warehouse

    Jarvi, Susan I.; Bianchi, Kiara R.

    2006-01-01

    Population level studies of genetic diversity can provide information about population structure, individual genetic distinctiveness and former population size. They are especially important for rare and threatened species like the Alala, where they can be used to assess extinction risks and evolutionary potential. In an ideal situation multiple methods should be used to detect variation, and these methods should be comparable across studies. In this report, we discuss AFLP (Amplified Fragment Length Polymorphism) as a genetic approach for detecting variation in the Alala , describe our findings, and discuss these in relation to mtDNA and microsatellite data reported elsewhere in this same population. AFLP is a technique for DNA fingerprinting that has wide applications. Because little or no prior knowledge of the particular species is required to carry out this method of analysis, AFLP can be used universally across varied taxonomic groups. Within individuals, estimates of diversity or heterozygosity across genomes may be complex because levels of diversity differ between and among genes. One of the more traditional methods of estimating diversity employs the use of codominant markers such as microsatellites. Codominant markers detect each allele at a locus independently. Hence, one can readily distinguish heterozygotes from homozygotes, directly assess allele frequencies and calculate other population level statistics. Dominant markers (for example, AFLP) are scored as either present or absent (null) so heterozygotes cannot be directly distinguished from homozygotes. However, the presence or absence data can be converted to expected heterozygosity estimates which are comparable to those determined by codominant markers. High allelic diversity and heterozygosity inherent in microsatellites make them excellent tools for studies of wild populations and they have been used extensively. One limitation to the use of microsatellites is that heterozygosity estimates are

  3. Electrophilic addition of astatine

    SciTech Connect

    Norseev, Yu.V.; Vasaros, L.; Nhan, D.D.; Huan, N.K.

    1988-03-01

    It has been shown for the first time that astatine is capable of undergoing addition reactions to unsaturated hydrocarbons. A new compound of astatine, viz., ethylene astatohydrin, has been obtained, and its retention numbers on squalane, Apiezon, and tricresyl phosphate have been found. The influence of various factors on the formation of ethylene astatohydrin has been studied. It has been concluded on the basis of the results obtained that the univalent cation of astatine in an acidic medium is protonated hypoastatous acid.

  4. A Genome-Wide Association Analysis Reveals Epistatic Cancellation of Additive Genetic Variance for Root Length in Arabidopsis thaliana.

    PubMed

    Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan

    2015-01-01

    Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance

  5. A Genome-Wide Association Analysis Reveals Epistatic Cancellation of Additive Genetic Variance for Root Length in Arabidopsis thaliana

    PubMed Central

    Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan

    2015-01-01

    Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance

  6. Projections of Education Statistics to 2009.

    ERIC Educational Resources Information Center

    Gerald, Debra E.; Hussar, William J.

    This report includes statistics on elementary and secondary schools and institutions of higher education at the national level. Included are projections for enrollment, graduates, classroom teachers, and expenditures to the year 2008. In addition, the report includes projections of enrollment in public elementary and secondary schools and high…

  7. Florida Library Directory with Statistics, 1997.

    ERIC Educational Resources Information Center

    Florida Dept. of State, Tallahassee. Div. of Library and Information Services.

    This 48th annual edition includes listings for over 1,000 libraries of all types in Florida, with contact names, phone numbers, addresses, and e-mail and web addresses. In addition, there is a section of library statistics, showing data on the use, resources, and financial condition of Florida's libraries. The first section consists of listings…

  8. R.A. Fisher's contributions to genetical statistics.

    PubMed

    Thompson, E A

    1990-12-01

    R. A. Fisher (1890-1962) was a professor of genetics, and many of his statistical innovations found expression in the development of methodology in statistical genetics. However, whereas his contributions in mathematical statistics are easily identified, in population genetics he shares his preeminence with Sewall Wright (1889-1988) and J. B. S. Haldane (1892-1965). This paper traces some of Fisher's major contributions to the foundations of statistical genetics, and his interactions with Wright and with Haldane which contributed to the development of the subject. With modern technology, both statistical methodology and genetic data are changing. Nonetheless much of Fisher's work remains relevant, and may even serve as a foundation for future research in the statistical analysis of DNA data. For Fisher's work reflects his view of the role of statistics in scientific inference, expressed in 1949: There is no wide or urgent demand for people who will define methods of proof in set theory in the name of improving mathematical statistics. There is a widespread and urgent demand for mathematicians who understand that branch of mathematics known as theoretical statistics, but who are capable also of recognising situations in the real world to which such mathematics is applicable. In recognising features of the real world to which his models and analyses should be applicable, Fisher laid a lasting foundation for statistical inference in genetic analyses. PMID:2085639

  9. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
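
    In symbols, the model described above can be written compactly as (the tensor-product expansion is the B-spline representation mentioned in the abstract; the index ranges and basis sizes are left generic):

    \[
    g\bigl(\mathrm{E}[Y \mid X]\bigr) \;=\; \int F\{X(t),\,t\}\,dt,
    \qquad
    F(x,t) \;\approx\; \sum_{j}\sum_{k} \theta_{jk}\, B_j(x)\, B_k(t),
    \]

    with the coefficients θ_jk estimated under roughness penalties, as described.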

  10. Reexamination of Statistical Methods for Comparative Anatomy: Examples of Its Application and Comparisons with Other Parametric and Nonparametric Statistics.

    PubMed

    Aversi-Ferreira, Roqueline A G M F; Nishijo, Hisao; Aversi-Ferreira, Tales Alexandre

    2015-01-01

    Various statistical methods have been published for comparative anatomy. However, few studies compared parametric and nonparametric statistical methods. Moreover, some previous studies using statistical method for comparative anatomy (SMCA) proposed the formula for comparison of groups of anatomical structures (multiple structures) among different species. The present paper described the usage of SMCA and compared the results by SMCA with those by parametric test (t-test) and nonparametric analyses (cladistics) of anatomical data. In conclusion, the SMCA can offer a more exact and precise way to compare single and multiple anatomical structures across different species, which requires analyses of nominal features in comparative anatomy. PMID:26413553

  11. Reexamination of Statistical Methods for Comparative Anatomy: Examples of Its Application and Comparisons with Other Parametric and Nonparametric Statistics

    PubMed Central

    Aversi-Ferreira, Roqueline A. G. M. F.; Nishijo, Hisao; Aversi-Ferreira, Tales Alexandre

    2015-01-01

    Various statistical methods have been published for comparative anatomy. However, few studies compared parametric and nonparametric statistical methods. Moreover, some previous studies using statistical method for comparative anatomy (SMCA) proposed the formula for comparison of groups of anatomical structures (multiple structures) among different species. The present paper described the usage of SMCA and compared the results by SMCA with those by parametric test (t-test) and nonparametric analyses (cladistics) of anatomical data. In conclusion, the SMCA can offer a more exact and precise way to compare single and multiple anatomical structures across different species, which requires analyses of nominal features in comparative anatomy. PMID:26413553

  12. Malaria Diagnosis Using Automated Analysers: A Boon for Hematopathologists in Endemic Areas

    PubMed Central

    Narang, Vikram; Sood, Neena; Garg, Bhavna; Gupta, Vikram Kumar

    2015-01-01

    Background: Haematological abnormalities are common in acute febrile tropical illnesses. Malaria is a major health problem in the tropics. In endemic areas, especially in the post-monsoon season, it is not practical to manually screen all peripheral blood films (PBF) for the malarial parasite. Automated analysers offer rapid, sensitive and cost-effective screening of all samples. Aim: The study was done to evaluate the usefulness of automated cell counters by analysing their histograms, scattergrams and the flags generated in malaria-positive and malaria-negative cases. Other haematological parameters that could help identify the malaria parasite on a peripheral blood smear were also compared. Materials and Methods: The blood samples were analysed using a Beckman Coulter LH-750. The abnormal scattergrams and additional peaks in the WBC histograms were examined diligently and compared with normal controls. Haematological abnormalities were also evaluated. Statistical Analysis: Statistical analysis was done using Epi-Info version 7.1.4, freely available from the CDC website. Fisher's exact test was applied to calculate the p-value, and a value < 0.05 was considered significant. Final identification of the malarial parasite species was done independently by peripheral blood smear examination by two pathologists. Results: Of the 200 cases evaluated, abnormal scattergrams were observed in all the cases of malaria, while abnormal WBC histogram peaks, demonstrating a peak at the threshold of the histogram, were noted in 96% of cases. The difference between the number of slides positive for an abnormal WBC scattergram and for abnormal WBC histogram peaks was statistically highly significant (p=0.007). Thus, an abnormal WBC scattergram is the better indicator of the presence of the malarial parasite. Of the haematological parameters, thrombocytopenia (92% of cases) emerged as the strongest predictor of malaria. Conclusion: It is recommended that haematopathologists review the haematological data and the scatter plots

  13. Fragile entanglement statistics

    NASA Astrophysics Data System (ADS)

    Brody, Dorje C.; Hughston, Lane P.; Meier, David M.

    2015-10-01

    If X and Y are independent, Y and Z are independent, and so are X and Z, one might be tempted to conclude that X, Y, and Z are independent. But it has long been known in classical probability theory that, intuitive as it may seem, this is not true in general. In quantum mechanics one can ask whether analogous statistics can emerge for configurations of particles in certain types of entangled states. The explicit construction of such states, along with the specification of suitable sets of observables that have the purported statistical properties, is not entirely straightforward. We show that an example of such a configuration arises in the case of an N-particle GHZ state, and we are able to identify a family of observables with the property that the associated measurement outcomes are independent for any choice of 2, 3, ..., N-1 of the particles, even though the measurement outcomes for all N particles are not independent. Although such states are highly entangled, the entanglement turns out to be ‘fragile’, i.e. the associated density matrix has the property that if one traces out the freedom associated with even a single particle, the resulting reduced density matrix is separable.
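
    The 'fragile' part of the statement is easy to verify numerically; the following sketch (my own, with four qubits chosen arbitrarily) traces one particle out of a GHZ density matrix and recovers the fully separable mixture described above.

```python
import numpy as np

N = 4                                  # number of qubits (arbitrary illustration)
dim = 2 ** N

# GHZ state (|00...0> + |11...1>)/sqrt(2)
psi = np.zeros(dim)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)               # N-particle density matrix

# trace out the first qubit
d_rest = 2 ** (N - 1)
rho4 = rho.reshape(2, d_rest, 2, d_rest)
rho_reduced = rho4[0, :, 0, :] + rho4[1, :, 1, :]

# the reduced state is the separable mixture (|0..0><0..0| + |1..1><1..1|)/2;
# the coherences between |0..0> and |1..1> have vanished
expected = np.zeros((d_rest, d_rest))
expected[0, 0] = expected[-1, -1] = 0.5
print("separable mixture after tracing one qubit:", np.allclose(rho_reduced, expected))
```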

  14. Statistical clumped isotope signatures.

    PubMed

    Röckmann, T; Popa, M E; Krol, M C; Hofmann, M E G

    2016-08-18

    High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules.
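
    The sign of the effect can be checked with a few lines of arithmetic. The sketch below is schematic (made-up isotope ratios for a symmetric molecule X2 whose two indistinguishable atoms come from different pools; not the authors' data): by the arithmetic-geometric mean inequality the actual doubly-substituted abundance always falls below the stochastic reference computed from the bulk composition, so the apparent anomaly is negative.

```python
# schematic anti-clumping calculation for a symmetric molecule X2 whose two
# indistinguishable atoms are drawn from pools with different isotope ratios
r1, r2 = 0.0110, 0.0115                  # hypothetical heavy/light ratios of the two pools
f1, f2 = r1 / (1 + r1), r2 / (1 + r2)    # heavy-isotope fractions in each pool

p_heavy_heavy = f1 * f2                  # actual abundance of the doubly substituted molecule
f_bulk = 0.5 * (f1 + f2)                 # bulk composition of the combined atoms
p_reference = f_bulk ** 2                # conventional stochastic reference

delta_clumped = p_heavy_heavy / p_reference - 1
print(f"apparent clumped anomaly: {1000 * delta_clumped:.3f} per mil (negative: anti-clumping)")
```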

  15. International petroleum statistics report

    SciTech Connect

    1997-05-01

    The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report is published for the use of Members of Congress, Federal agencies, State agencies, industry, and the general public. Publication of this report is in keeping with responsibilities given the Energy Information Administration in Public Law 95-91. The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.

  16. Statistical clumped isotope signatures

    PubMed Central

    Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.

    2016-01-01

    High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules. PMID:27535168

  17. [Comment on] Statistical discrimination

    NASA Astrophysics Data System (ADS)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.

  18. Statistical clumped isotope signatures

    NASA Astrophysics Data System (ADS)

    Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.

    2016-08-01

    High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules.

  19. Statistical clumped isotope signatures.

    PubMed

    Röckmann, T; Popa, M E; Krol, M C; Hofmann, M E G

    2016-01-01

    High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules. PMID:27535168

  20. Sufficient Statistics: an Example

    NASA Technical Reports Server (NTRS)

    Quirein, J.

    1973-01-01

    The feature selection problem is considered resulting from the transformation x = Bz, where B is a k by n matrix of rank k and k is less than or equal to n. Such a transformation can be considered to reduce the dimension of each observation vector z, and in general, such a transformation results in a loss of information. In terms of the divergence, this information loss is expressed by the fact that the average divergence D_B computed using variable x is less than or equal to the average divergence D computed using variable z. If D_B = D, then B is said to be a sufficient statistic for the average divergence D. If B is a sufficient statistic for the average divergence, then it can be shown that the probability of misclassification computed using variable x (of dimension k less than or equal to n) is equal to the probability of misclassification computed using variable z. Also included is what is believed to be a new proof of the well-known fact that D is greater than or equal to D_B. Using the techniques necessary to prove the above fact, it is shown that the Bhattacharyya distance as measured by variable x is less than or equal to the Bhattacharyya distance as measured by variable z.

  1. FAME: Software for analysing rock microstructures

    NASA Astrophysics Data System (ADS)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistical and plotting tools allow a graphical visualisation of the results, such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated for example during 2D in-situ deformation experiments. The use and versatility of FAME is demonstrated on quartz and deuterium ice samples.

  2. Relationship between Graduate Students' Statistics Self-Efficacy, Statistics Anxiety, Attitude toward Statistics, and Social Support

    ERIC Educational Resources Information Center

    Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael

    2011-01-01

    Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may lead to successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…

  3. A statistical development of entropy for the introductory physics course

    NASA Astrophysics Data System (ADS)

    Schoepf, David C.

    2002-02-01

    Many introductory physics texts introduce the statistical basis for the definition of entropy in addition to the Clausius definition, ΔS=q/T. We use a model based on equally spaced energy levels to present a way that the statistical definition of entropy can be developed at the introductory level. In addition to motivating the statistical definition of entropy, we also develop statistical arguments to answer the following questions: (i) Why does a system approach a state of maximum number of microstates? (ii) What is the equilibrium distribution of particles? (iii) What is the statistical basis of temperature? (iv) What is the statistical basis for the direction of spontaneous energy transfer? Finally, a correspondence between the statistical and the classical Clausius definitions of entropy is made.
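
    The counting argument behind the statistical definition can be made explicit with a minimal numeric sketch (the equally spaced quanta model and the numbers are illustrative): for N particles sharing q quanta, the number of microstates is the usual multiset coefficient, and S = k ln W grows as energy is added.

```python
# Count microstates of N particles sharing q equal energy quanta, then form S = k ln W.
from math import comb, log

k_B = 1.380649e-23  # Boltzmann constant, J/K

def microstates(q, n):
    """Ways to distribute q indistinguishable quanta over n distinguishable particles."""
    return comb(q + n - 1, q)

def entropy(q, n):
    return k_B * log(microstates(q, n))

N = 50
for q in (10, 20, 40, 80):
    print(q, microstates(q, N), entropy(q, N))

# A statistical temperature would then follow from 1/T = dS/dU, e.g. by a finite
# difference of S over one added quantum of energy (question (iii) in the abstract).
```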

  4. Dissociable behavioural outcomes of visual statistical learning

    PubMed Central

    Turk-Browne, Nicholas B.; Seitz, Aaron R.

    2016-01-01

    Statistical learning refers to the extraction of probabilistic relationships between stimuli and is increasingly used as a method to understand learning processes. However, numerous cognitive processes are sensitive to the statistical relationships between stimuli and any one measure of learning may conflate these processes; to date little research has focused on differentiating these processes. To understand how multiple processes underlie statistical learning, here we compared, within the same study, operational measures of learning from different tasks that may be differentially sensitive to these processes. In Experiment 1, participants were visually exposed to temporal regularities embedded in a stream of shapes. Their task was to periodically detect whether a shape, whose contrast was staircased to a threshold level, was present or absent. Afterwards, they completed a search task, where statistically predictable shapes were found more quickly. We used the search task to label shape pairs as “learned” or “non-learned”, and then used these labels to analyse the detection task. We found a dissociation between learning on the search task and the detection task where only non-learned pairs showed learning effects in the detection task. This finding was replicated in further experiments with recognition memory (Experiment 2) and associative learning tasks (Experiment 3). Taken together, these findings are consistent with the view that statistical learning may comprise a family of processes that can produce dissociable effects on different aspects of behaviour. PMID:27478399

  5. Machine learning, statistical learning and the future of biological research in psychiatry.

    PubMed

    Iniesta, R; Stahl, D; McGuffin, P

    2016-09-01

    Psychiatric research has entered the age of 'Big Data'. Datasets now routinely involve thousands of heterogeneous variables, including clinical, neuroimaging, genomic, proteomic, transcriptomic and other 'omic' measures. The analysis of these datasets is challenging, especially when the number of measurements exceeds the number of individuals, and may be further complicated by missing data for some subjects and variables that are highly correlated. Statistical learning-based models are a natural extension of classical statistical approaches but provide more effective methods to analyse very large datasets. In addition, the predictive capability of such models promises to be useful in developing decision support systems. That is, methods that can be introduced to clinical settings and guide, for example, diagnosis classification or personalized treatment. In this review, we aim to outline the potential benefits of statistical learning methods in clinical research. We first introduce the concept of Big Data in different environments. We then describe how modern statistical learning models can be used in practice on Big Datasets to extract relevant information. Finally, we discuss the strengths of using statistical learning in psychiatric studies, from both research and practical clinical points of view.

  6. Machine learning, statistical learning and the future of biological research in psychiatry.

    PubMed

    Iniesta, R; Stahl, D; McGuffin, P

    2016-09-01

    Psychiatric research has entered the age of 'Big Data'. Datasets now routinely involve thousands of heterogeneous variables, including clinical, neuroimaging, genomic, proteomic, transcriptomic and other 'omic' measures. The analysis of these datasets is challenging, especially when the number of measurements exceeds the number of individuals, and may be further complicated by missing data for some subjects and variables that are highly correlated. Statistical learning-based models are a natural extension of classical statistical approaches but provide more effective methods to analyse very large datasets. In addition, the predictive capability of such models promises to be useful in developing decision support systems. That is, methods that can be introduced to clinical settings and guide, for example, diagnosis classification or personalized treatment. In this review, we aim to outline the potential benefits of statistical learning methods in clinical research. We first introduce the concept of Big Data in different environments. We then describe how modern statistical learning models can be used in practice on Big Datasets to extract relevant information. Finally, we discuss the strengths of using statistical learning in psychiatric studies, from both research and practical clinical points of view. PMID:27406289

  7. Statistical and Computational Methods for High-Throughput Sequencing Data Analysis of Alternative Splicing

    PubMed Central

    2013-01-01

    The burgeoning field of high-throughput sequencing significantly improves our ability to understand the complexity of transcriptomes. Alternative splicing, as one of the most important driving forces for transcriptome diversity, can now be studied at an unprecedent resolution. Efficient and powerful computational and statistical methods are in urgent need to facilitate the characterization and quantification of alternative splicing events. Here we discuss methods in splice junction read mapping, and methods in exon-centric or isoform-centric quantification of alternative splicing. In addition, we discuss HITS-CLIP and splicing QTL analyses which are novel high-throughput sequencing based approaches in the dissection of splicing regulation. PMID:24058384

  8. MMOD: an R library for the calculation of population differentiation statistics.

    PubMed

    Winter, David J

    2012-11-01

    MMOD is a library for the R programming language that allows the calculation of the population differentiation measures D(est), G″(ST) and φ'(ST). R provides a powerful environment in which to conduct and record population genetic analyses but, at present, no R libraries provide functions for the calculation of these statistics from standard population genetic files. In addition to the calculation of differentiation measures, mmod can produce parametric bootstrap and jackknife samples of data sets for further analysis. By integrating with and complimenting the existing libraries adegenet and pegas, mmod extends the power of R as a population genetic platform. PMID:22883857
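
    The abstract names the statistics but not their formulas; as a rough orientation, the sketch below computes Jost's D for a single locus from per-population allele frequencies using the common heterozygosity-based formulation. It illustrates the quantity itself and does not reproduce mmod's actual R interface or its file-format handling.

```python
# Jost's D for one locus from allele-frequency vectors of k subpopulations.
# This sketches the statistic; it is not the mmod package's API.
import numpy as np

def jost_d(freqs):
    """freqs: array of shape (k_subpops, n_alleles); each row sums to 1."""
    freqs = np.asarray(freqs, dtype=float)
    k = freqs.shape[0]
    h_s = np.mean(1.0 - np.sum(freqs ** 2, axis=1))  # mean within-subpopulation heterozygosity
    p_bar = freqs.mean(axis=0)                       # pooled (mean) allele frequencies
    h_t = 1.0 - np.sum(p_bar ** 2)                   # total heterozygosity
    return (h_t - h_s) / (1.0 - h_s) * k / (k - 1.0)

print(jost_d([[1.0, 0.0], [0.0, 1.0]]))  # 1.0: subpopulations fixed for different alleles
print(jost_d([[0.6, 0.4], [0.6, 0.4]]))  # 0.0: identical subpopulations
```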

  9. Search Databases and Statistics: Pitfalls and Best Practices in Phosphoproteomics.

    PubMed

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    Advances in mass spectrometric instrumentation in the past 15 years have resulted in an explosion in the raw data yield from typical phosphoproteomics workflows. This poses the challenge of confidently identifying peptide sequences, localizing phosphosites to proteins and quantifying these from the vast amounts of raw data. This task is tackled by computational tools implementing algorithms that match the experimental data to databases, providing the user with lists for downstream analysis. Several platforms for such automated interpretation of mass spectrometric data have been developed, each having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here. PMID:26584936
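
    One widely used criterion of the kind referred to here is target-decoy false discovery rate (FDR) filtering. The sketch below uses synthetic scores and the simple decoy-to-target ratio estimate to pick a score cutoff at roughly 1% FDR; it is a generic illustration, not the workflow of any particular search platform.

```python
# Threshold peptide-spectrum-match scores at ~1% FDR via the decoy/target ratio.
# Scores are synthetic; real pipelines use the search engine's target and decoy hits.
import numpy as np

rng = np.random.default_rng(0)
target_scores = rng.normal(3.0, 1.0, 2000)  # mixture of correct and incorrect matches
decoy_scores = rng.normal(0.0, 1.0, 2000)   # matches against reversed/shuffled sequences

def fdr_cutoff(targets, decoys, fdr=0.01):
    """Smallest score cutoff whose estimated FDR (decoys/targets above it) is <= fdr."""
    for cutoff in np.sort(np.concatenate([targets, decoys])):
        n_t = np.sum(targets >= cutoff)
        n_d = np.sum(decoys >= cutoff)
        if n_t > 0 and n_d / n_t <= fdr:
            return cutoff
    return np.inf

cut = fdr_cutoff(target_scores, decoy_scores)
print(f"score cutoff: {cut:.2f}, accepted matches: {np.sum(target_scores >= cut)}")
```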

  10. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines, based both on the NAS Parallel Benchmarks and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included the peak performance of the machine, and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.

  11. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
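
    An additive Holt-Winters model of the kind named at the end can be fitted with standard time-series tooling. The sketch below applies statsmodels' exponential smoothing to a synthetic residual series; the data, seasonal period and parameter choices are placeholders rather than the authors' actual setup.

```python
# Fit an additive Holt-Winters (triple exponential smoothing) model to a synthetic
# residual series and forecast ahead, mimicking the hybrid propagator's prediction step.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
t = np.arange(400)
# Synthetic "missing dynamics": slow drift + periodic signal + noise (placeholder data).
series = 0.01 * t + np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)

model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=50)
fit = model.fit()
forecast = fit.forecast(100)  # predict the next 100 samples of the residual signal
print(forecast[:5])
```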

  12. Search Databases and Statistics: Pitfalls and Best Practices in Phosphoproteomics.

    PubMed

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    Advances in mass spectrometric instrumentation in the past 15 years have resulted in an explosion in the raw data yield from typical phosphoproteomics workflows. This poses the challenge of confidently identifying peptide sequences, localizing phosphosites to proteins and quantifying these from the vast amounts of raw data. This task is tackled by computational tools implementing algorithms that match the experimental data to databases, providing the user with lists for downstream analysis. Several platforms for such automated interpretation of mass spectrometric data have been developed, each having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here.

  13. Statistics of entrance times

    NASA Astrophysics Data System (ADS)

    Talkner, Peter

    2003-03-01

    The statistical properties of discrete Markov processes are investigated in terms of entrance times. Simple relations are given for their density and higher-order distributions. These quantities are used for introducing a generalized Rice phase and for characterizing the synchronization of a process with an external driving force. For the McNamara-Wiesenfeld model of stochastic resonance, parameter regions (spanned by the noise strength, driving frequency and driving strength) are identified in which the process is locked to the frequency of the external driving and in which the diffusion of the Rice phase becomes minimal. At the same time the Fano factor of the number of entrances per period of the driving force has a minimum.
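
    The Fano factor mentioned in the last sentence is simply the variance-to-mean ratio of the number of entrances per driving period; a minimal sketch with synthetic counts (a Poisson process gives F = 1, while locking to the drive pushes F below 1):

```python
# Fano factor of entrance counts per driving period: F = Var(N) / E[N].
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(lam=4.0, size=10_000)  # synthetic entrances per period
fano = counts.var(ddof=1) / counts.mean()
print(f"Fano factor ~ {fano:.3f}")          # close to 1 for Poisson counts
```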

  14. Statistical crack mechanics

    SciTech Connect

    Dienes, J.K.

    1983-01-01

    An alternative to the use of plasticity theory to characterize the inelastic behavior of solids is to represent the flaws by statistical methods. We have taken such an approach to study fragmentation because it offers a number of advantages. Foremost among these is that, by considering the effects of flaws, it becomes possible to address the underlying physics directly. For example, we have been able to explain why rocks exhibit large strain-rate effects (a consequence of the finite growth rate of cracks), why a spherical explosive imbedded in oil shale produces a cavity with a nearly square section (opening of bedding cracks) and why propellants may detonate following low-speed impact (a consequence of frictional hot spots).

  15. Conditional statistical model building

    NASA Astrophysics Data System (ADS)

    Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus

    2008-03-01

    We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly, and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of Corpus Callosum. One fifth of the data set was used as a training set, which was non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters with the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The dice measures for the registration without prior and the registration with a prior were 0.875 +/- 0.042 and 0.8615 +/- 0.051, respectively.
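
    The Dice measure used for validation is the standard overlap score 2|A∩B|/(|A|+|B|); a minimal sketch for binary segmentation masks (the arrays are synthetic, not the Corpus Callosum data):

```python
# Dice similarity between two binary segmentation masks: 2|A∩B| / (|A| + |B|).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((10, 10), dtype=int); a[2:8, 2:8] = 1  # synthetic reference mask
b = np.zeros((10, 10), dtype=int); b[3:9, 3:9] = 1  # synthetic registration result
print(dice(a, b))  # 2*25/72 ~ 0.694
```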

  16. Statistical design controversy

    SciTech Connect

    Evans, L.S.; Hendrey, G.R.; Thompson, K.H.

    1985-02-01

    This article is a response to criticisms, received by Evans, Hendrey, and Thompson, that their article was biased because of omissions and misrepresentations. The authors contend that their conclusion that experimental designs having only one plot per treatment "were, from the outset, not capable of differentiating between treatment effects and field-position effects" remains valid and is supported by decades of agronomic research. Irving, Troiano, and McCune read the article as a review of all studies of acidic rain effects on soybeans. It was not. The article was written out of concern over the comparisons being made among studies that purport to evaluate effects of acid deposition on field-grown crops, comparisons that implicitly assume all of the studies are of equal scientific value. They are not. Only experimental approaches that are well focused and designed with appropriate agronomic and statistical procedures should be used for credible regional and national assessments of crop inventories. 12 references.

  17. BIG DATA AND STATISTICS

    PubMed Central

    Rossell, David

    2016-01-01

    Big Data brings unprecedented power to address scientific, economic and societal issues, but also amplifies the possibility of certain pitfalls. These include using purely data-driven approaches that disregard understanding the phenomenon under study, aiming at a dynamically moving target, ignoring critical data collection issues, summarizing or preprocessing the data inadequately and mistaking noise for signal. We review some success stories and illustrate how statistical principles can help obtain more reliable information from data. We also touch upon current challenges that require active methodological research, such as strategies for efficient computation, integration of heterogeneous data, extending the underlying theory to increasingly complex questions and, perhaps most importantly, training a new generation of scientists to develop and deploy these strategies. PMID:27722040

  18. Statistical physics "Beyond equilibrium"

    SciTech Connect

    Ecke, Robert E

    2009-01-01

    The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.

  19. Wide Wide World of Statistics: International Statistics on the Internet.

    ERIC Educational Resources Information Center

    Foudy, Geraldine

    2000-01-01

    Explains how to find statistics on the Internet, especially international statistics. Discusses advantages over print sources, including convenience, currency of information, cost effectiveness, and value-added formatting; sources of international statistics; United Nations agencies; search engines and power searching; and evaluating sources. (LRW)

  20. Understanding Statistics and Statistics Education: A Chinese Perspective

    ERIC Educational Resources Information Center

    Shi, Ning-Zhong; He, Xuming; Tao, Jian

    2009-01-01

    In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…

  1. Statistical Literacy: Developing a Youth and Adult Education Statistical Project

    ERIC Educational Resources Information Center

    Conti, Keli Cristina; Lucchesi de Carvalho, Dione

    2014-01-01

    This article focuses on the notion of literacy--general and statistical--in the analysis of data from a fieldwork research project carried out as part of a master's degree that investigated the teaching and learning of statistics in adult education mathematics classes. We describe the statistical context of the project that involved the…

  2. Foundations of Statistical Mechanics in Space Plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2014-12-01

    Systems at thermal equilibrium have their distribution function of particle velocities stabilized into a Maxwell distribution, which is connected with the classical framework of Boltzmann-Gibbs (BG) statistical mechanics. However, Maxwell distributions are rare in space plasmas; the vast majority of these plasmas reside at stationary states out of thermal equilibrium, which are described by kappa distributions. Kappa distributions do not embody BG statistics, but instead, they are connected with the generalized statistical framework of non-extensive statistical mechanics that offers a solid theoretical basis for describing particle systems like collisionless space plasmas. Through the statistical formulation of kappa distributions, basic thermodynamic variables like the temperature, thermal pressure, and entropy become physically meaningful and determinable, similarly to their classical BG description at thermal equilibrium. In addition, useful formulations of kappa distributions were developed in order to describe multi-particle distributions, and particle systems with a non-zero potential energy. Finally, the variety of kappa distribution formulations and the proven tools of non-extensive statistical mechanics have been successfully applied to numerous space plasmas throughout the heliosphere, from the inner heliosphere (e.g., the solar wind and planetary magnetospheres) to the outer heliosphere (e.g., the inner heliosheath) and beyond.
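
    For orientation, one commonly quoted isotropic form of the kappa distribution is given below; conventions for the kappa index and the thermal speed θ vary between authors, so this normalization is an illustrative standard choice rather than necessarily the one used in the work above. It reduces to the Maxwell distribution in the limit κ → ∞.

```latex
f_{\kappa}(\mathbf{v}) =
  \frac{n}{(\pi \kappa \theta^{2})^{3/2}}
  \frac{\Gamma(\kappa + 1)}{\Gamma\!\left(\kappa - \tfrac{1}{2}\right)}
  \left( 1 + \frac{v^{2}}{\kappa \theta^{2}} \right)^{-(\kappa + 1)}
  \;\xrightarrow[\ \kappa \to \infty\ ]{}\;
  \frac{n}{(\pi \theta^{2})^{3/2}} \, e^{-v^{2}/\theta^{2}}
```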

  3. NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain.

    PubMed

    Gorgolewski, Krzysztof J; Varoquaux, Gael; Rivera, Gabriel; Schwarz, Yannick; Ghosh, Satrajit S; Maumet, Camille; Sochat, Vanessa V; Nichols, Thomas E; Poldrack, Russell A; Poline, Jean-Baptiste; Yarkoni, Tal; Margulies, Daniel S

    2015-01-01

    Here we present NeuroVault-a web based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain. NeuroVault is easy to use and employs modern web technologies to provide informative visualization of data without the need to install additional software. In addition, it leverages the power of the Neurosynth database to provide cognitive decoding of deposited maps. The data are exposed through a public REST API enabling other services and tools to take advantage of it. NeuroVault is a new resource for researchers interested in conducting meta- and coactivation analyses.
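
    As a sketch of how the public REST API mentioned above might be queried from a script: the endpoint path and the response fields used below are assumptions based on typical REST conventions, not details confirmed against NeuroVault's documentation, so treat them as placeholders.

```python
# Query the NeuroVault public REST API for a few collections of statistical maps.
# The endpoint path and response fields are assumed, not verified against the live API.
import requests

BASE = "https://neurovault.org/api"  # assumed API root

resp = requests.get(f"{BASE}/collections/", params={"limit": 5}, timeout=30)
resp.raise_for_status()
payload = resp.json()

# A paginated response of the form {"count": ..., "results": [...]} is assumed here.
for collection in payload.get("results", []):
    print(collection.get("id"), collection.get("name"))
```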

  4. NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain

    PubMed Central

    Gorgolewski, Krzysztof J.; Varoquaux, Gael; Rivera, Gabriel; Schwarz, Yannick; Ghosh, Satrajit S.; Maumet, Camille; Sochat, Vanessa V.; Nichols, Thomas E.; Poldrack, Russell A.; Poline, Jean-Baptiste; Yarkoni, Tal; Margulies, Daniel S.

    2015-01-01

    Here we present NeuroVault—a web based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain. NeuroVault is easy to use and employs modern web technologies to provide informative visualization of data without the need to install additional software. In addition, it leverages the power of the Neurosynth database to provide cognitive decoding of deposited maps. The data are exposed through a public REST API enabling other services and tools to take advantage of it. NeuroVault is a new resource for researchers interested in conducting meta- and coactivation analyses. PMID:25914639

  5. Performance Boosting Additive

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.

  6. Sewage sludge additive

    NASA Technical Reports Server (NTRS)

    Kalvinskas, J. J.; Mueller, W. A.; Ingham, J. D. (Inventor)

    1980-01-01

    The additive is for a raw sewage treatment process of the type where settling tanks are used for the purpose of permitting the suspended matter in the raw sewage to be settled as well as to permit adsorption of the dissolved contaminants in the water of the sewage. The sludge, which settles down to the bottom of the settling tank is extracted, pyrolyzed and activated to form activated carbon and ash which is mixed with the sewage prior to its introduction into the settling tank. The sludge does not provide all of the activated carbon and ash required for adequate treatment of the raw sewage. It is necessary to add carbon to the process and instead of expensive commercial carbon, coal is used to provide the carbon supplement.

  7. Perspectives on Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Bourell, David L.

    2016-07-01

    Additive manufacturing (AM) has skyrocketed in visibility commercially and in the public sector. This article describes the development of this field from early layered manufacturing approaches of photosculpture, topography, and material deposition. Certain precursors to modern AM processes are also briefly described. The growth of the field over the last 30 years is presented. Included is the standard delineation of AM technologies into seven broad categories. The economics of AM part generation is considered, and the impacts of the economics on application sectors are described. On the basis of current trends, the future outlook will include a convergence of AM fabricators, mass-produced AM fabricators, enabling of topology optimization designs, and specialization in the AM legal arena. Long-term developments with huge impact are organ printing and volume-based printing.

  8. Sarks as additional fermions

    NASA Astrophysics Data System (ADS)

    Agrawal, Jyoti; Frampton, Paul H.; Jack Ng, Y.; Nishino, Hitoshi; Yasuda, Osamu

    1991-03-01

    An extension of the standard model is proposed. The gauge group is SU(2)_X ⊗ SU(3)_C ⊗ SU(2)_S ⊗ U(1)_Q, where all gauge symmetries are unbroken. The colour and electric charge are combined with SU(2)_S, which becomes strongly coupled at approximately 500 GeV and binds preons to form fermionic and vector bound states. The usual quarks and leptons are singlets under SU(2)_X, but additional fermions, called sarks, transform under it and the electroweak group. The present model explains why no more than three light quark-lepton families can exist. Neutral sark baryons, called narks, are candidates for the cosmological dark matter, having the characteristics designed for WIMPs. Further phenomenological implications of sarks are analyzed, including electron-positron annihilation, Z0 decay, flavor-changing neutral currents, baryon-number non-conservation, sarkonium and the neutron electric dipole moment.

  9. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: What models are used, what errors exist in real test data, and what the model allows us to say about the DUT will be discussed. In addition, how to use other sources of data such as historical, heritage, and similar part and how to apply experience, physics, and expert opinion to the analysis will be covered. Also included will be concepts of Bayesian statistics, data fitting, and bounding rates.
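
    One standard bounding calculation of the kind alluded to here (a generic one-sided Poisson upper limit, offered as an illustration rather than the specific method used in the presentation): with n single-event upsets observed over a fluence F, an upper confidence bound on the event cross section follows from the chi-square relation below.

```python
# Upper confidence bound on an event cross section from n observed events in fluence F,
# using the standard Poisson/chi-square relation. The numbers are illustrative only.
from scipy.stats import chi2

def upper_bound_cross_section(n_events, fluence, cl=0.95):
    """One-sided upper bound on events per unit fluence at confidence level cl."""
    upper_counts = 0.5 * chi2.ppf(cl, 2 * (n_events + 1))
    return upper_counts / fluence

# Even zero events observed in a fluence of 1e7 ions/cm^2 leaves a nonzero upper bound.
print(upper_bound_cross_section(0, 1e7))  # ~3.0e-7 cm^2 at 95% confidence
```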

  10. Statistical summary of daily values data and trend analysis of dissolved-solids data at National Stream Quality Accounting Network (NASQAN) stations

    USGS Publications Warehouse

    Wells, F.C.; Schertz, T.L.

    1983-01-01

    This report provides a statistical summary of the available continuous and once-daily discharge, specific-conductance, dissolved oxygen, water temperature, and pH data collected at NASQAN stations during the 1973-81 water years and documents the period of record on which the statistical calculations were based. In addition, dissolved-solids data are examined by regression analyses to determine the relation between dissolved solids and specific conductance and to determine if long-term trends can be detected in dissolved-solids concentrations. Statistical summaries, regression equations expressing the relation between dissolved solids and specific conductance, and graphical presentations of trend analyses of dissolved solids are presented for 515 NASQAN stations in the United States, Canada, Guam, and Puerto Rico.
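
    The dissolved-solids/specific-conductance relation examined here is typically a simple linear regression of the form DS = a*SC + b; a minimal sketch on synthetic data (the coefficients and values are placeholders, not NASQAN results):

```python
# Least-squares fit of dissolved solids (mg/L) against specific conductance (uS/cm).
# The data are synthetic placeholders, not NASQAN measurements.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
specific_conductance = rng.uniform(100, 2000, 200)                       # uS/cm
dissolved_solids = 0.62 * specific_conductance + rng.normal(0, 25, 200)  # mg/L

fit = linregress(specific_conductance, dissolved_solids)
print(f"DS = {fit.slope:.3f} * SC + {fit.intercept:.1f},  r^2 = {fit.rvalue ** 2:.3f}")
```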

  11. The Statistical Consulting Center for Astronomy (SCCA)

    NASA Technical Reports Server (NTRS)

    Akritas, Michael

    2001-01-01

    The process by which raw astronomical data acquisition is transformed into scientifically meaningful results and interpretation typically involves many statistical steps. Traditional astronomy limits itself to a narrow range of old and familiar statistical methods: means and standard deviations; least-squares methods like chi-square minimization; and simple nonparametric procedures such as the Kolmogorov-Smirnov tests. These tools are often inadequate for the complex problems and datasets under investigation, and recent years have witnessed an increased usage of maximum-likelihood, survival analysis, multivariate analysis, wavelet and advanced time-series methods. The Statistical Consulting Center for Astronomy (SCCA) assisted astronomers with the use of sophisticated tools and matched these tools to specific problems. The SCCA operated with two professors of statistics and a professor of astronomy working together. Questions were received by e-mail, and were discussed in detail with the questioner. Summaries of those questions and answers leading to new approaches were posted on the Web (www.state.psu.edu/mga/SCCA). In addition to serving individual astronomers, the SCCA established a Web site for general use that provides hypertext links to selected on-line public-domain statistical software and services. The StatCodes site (www.astro.psu.edu/statcodes) provides over 200 links in the areas of: Bayesian statistics; censored and truncated data; correlation and regression; density estimation and smoothing; general statistics packages and information; image analysis; interactive Web tools; multivariate analysis; multivariate clustering and classification; nonparametric analysis; software written by astronomers; spatial statistics; statistical distributions; time series analysis; and visualization tools. StatCodes has received a remarkably high and constant hit rate of 250 hits/week (over 10,000/year) since its inception in mid-1997. It is of interest to

  12. Heart Disease and Stroke Statistics

    MedlinePlus

    Topic fact sheets (PDF) include Nutrition, Obesity, and Peripheral Artery Disease. For questions about these statistics, please contact the American Heart Association National Center, Office of Science & Medicine at statistics@heart.org.

  13. Muscular Dystrophy: Data and Statistics

    MedlinePlus

    MD STAR net Data and Statistics. For more information on MD STAR net, see Research and Tracking.

  14. Statistical methods in physical mapping

    SciTech Connect

    Nelson, D.O.

    1995-05-01

    One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and in past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.

  15. Statistical assessment of biosimilar products.

    PubMed

    Chow, Shein-Chung; Liu, Jen-Pei

    2010-01-01

    Biological products or medicines are therapeutic agents that are produced using a living system or organism. Access to these life-saving biological products is limited because of their high costs. Patents on the early biological products will expire in the next few years. This allows other biopharmaceutical/biotech companies to manufacture generic versions of the biological products, which are referred to as follow-on biological products by the U.S. Food and Drug Administration (FDA) or as biosimilar medicinal products by the European Medicine Agency (EMEA) of the European Union (EU). Competition from cost-effective follow-on biological products with equivalent efficacy and safety can cut down the costs and hence increase patients' access to the much-needed biological pharmaceuticals. Unlike conventional small-molecule pharmaceuticals, the complexity and heterogeneity of the molecular structure, complicated manufacturing process, different analytical methods, and possibility of severe immunogenicity reactions make evaluation of equivalence (similarity) between the biosimilar products and their corresponding innovator product a great challenge for both the scientific community and regulatory agencies. In this paper, we provide an overview of the current regulatory requirements for approval of biosimilar products. A review of current criteria for evaluation of bioequivalence for the traditional chemical generic products is provided. A detailed description of the differences between the biosimilar and chemical generic products is given with respect to size and structure, immunogenicity, product quality attributes, and manufacturing processes. In addition, statistical considerations including design criteria, fundamental biosimilar assumptions, and statistical methods are proposed. The possibility of using genomic data in evaluation of biosimilar products is also explored. PMID:20077246

  16. Thoughts About Theories and Statistics.

    PubMed

    Fawcett, Jacqueline

    2015-07-01

    The purpose of this essay is to share my ideas about the connection between theories and statistics. The essay content reflects my concerns about some researchers' and readers' apparent lack of clarity about what constitutes appropriate statistical testing and conclusions about the empirical adequacy of theories. The reciprocal relation between theories and statistics is emphasized and the conclusion is that statistics without direction from theory is no more than a hobby.

  17. NOx analyser interference from alkenes

    NASA Astrophysics Data System (ADS)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance

  18. Critical analysis of adsorption data statistically

    NASA Astrophysics Data System (ADS)

    Kaushal, Achla; Singh, S. K.

    2016-09-01

    Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of the experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ2 showed the results to be in favour of the data collected from the experiment; this is shown on probability charts. The K value for the Langmuir isotherm was 0.8582 and the m value obtained for the Freundlich adsorption isotherm was 0.725; both are <1, indicating favourable isotherms. Karl Pearson's correlation coefficient values for the Langmuir and Freundlich adsorption isotherms were obtained as 0.99 and 0.95 respectively, which shows a high degree of correlation between the variables. This validates the data obtained for adsorption of zinc ions from the contaminated aqueous solution with the help of mango leaf powder.
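
    A minimal sketch of the isotherm-fitting and correlation step described above, on synthetic equilibrium data rather than the paper's own measurements: fit the Langmuir form q = q_max*K*C/(1 + K*C) by nonlinear least squares and report Pearson's correlation between observed and fitted uptake.

```python
# Fit a Langmuir isotherm to synthetic adsorption data and compute Pearson's r.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def langmuir(c, q_max, k):
    return q_max * k * c / (1.0 + k * c)

c_eq = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])  # equilibrium concentration (mg/L), synthetic
q_obs = np.array([2.1, 3.6, 5.8, 7.9, 9.4, 10.3])      # uptake (mg/g), synthetic

(q_max, k), _ = curve_fit(langmuir, c_eq, q_obs, p0=(10.0, 0.01))
r, _ = pearsonr(q_obs, langmuir(c_eq, q_max, k))
print(f"q_max = {q_max:.2f} mg/g, K = {k:.4f} L/mg, Pearson r = {r:.3f}")
```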

  19. Statistical Seismology and Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Tiampo, K. F.; González, P. J.; Kazemian, J.

    2014-12-01

    While seismicity triggered or induced by natural resource production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to a rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). In addition, over the past twenty years, the increase in both number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several occurrences from energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering. Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and

  20. Additive lattice kirigami

    PubMed Central

    Castle, Toen; Sussman, Daniel M.; Tanis, Michael; Kamien, Randall D.

    2016-01-01

    Kirigami uses bending, folding, cutting, and pasting to create complex three-dimensional (3D) structures from a flat sheet. In the case of lattice kirigami, this cutting and rejoining introduces defects into an underlying 2D lattice in the form of points of nonzero Gaussian curvature. A set of simple rules was previously used to generate a wide variety of stepped structures; we now pare back these rules to their minimum. This allows us to describe a set of techniques that unify a wide variety of cut-and-paste actions under the rubric of lattice kirigami, including adding new material and rejoining material across arbitrary cuts in the sheet. We also explore the use of more complex lattices and the different structures that consequently arise. Regardless of the choice of lattice, creating complex structures may require multiple overlapping kirigami cuts, where subsequent cuts are not performed on a locally flat lattice. Our additive kirigami method describes such cuts, providing a simple methodology and a set of techniques to build a huge variety of complex 3D shapes. PMID:27679822

  1. Additive lattice kirigami

    PubMed Central

    Castle, Toen; Sussman, Daniel M.; Tanis, Michael; Kamien, Randall D.

    2016-01-01

    Kirigami uses bending, folding, cutting, and pasting to create complex three-dimensional (3D) structures from a flat sheet. In the case of lattice kirigami, this cutting and rejoining introduces defects into an underlying 2D lattice in the form of points of nonzero Gaussian curvature. A set of simple rules was previously used to generate a wide variety of stepped structures; we now pare back these rules to their minimum. This allows us to describe a set of techniques that unify a wide variety of cut-and-paste actions under the rubric of lattice kirigami, including adding new material and rejoining material across arbitrary cuts in the sheet. We also explore the use of more complex lattices and the different structures that consequently arise. Regardless of the choice of lattice, creating complex structures may require multiple overlapping kirigami cuts, where subsequent cuts are not performed on a locally flat lattice. Our additive kirigami method describes such cuts, providing a simple methodology and a set of techniques to build a huge variety of complex 3D shapes.

  2. Springer Handbook of Engineering Statistics

    NASA Astrophysics Data System (ADS)

    Pham, Hoang

    The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning and to give them realistic predictions of how these could be improved.

  3. Statistical log analysis made practical

    SciTech Connect

    Mitchell, W.K.; Nelson, R.J. )

    1991-06-01

    This paper discusses the advantages of a statistical approach to log analysis. Statistical techniques use inverse methods to calculate formation parameters. The use of statistical techniques has been limited, however, by the complexity of the mathematics and lengthy computer time required to minimize traditionally used nonlinear equations.

  4. Invention Activities Support Statistical Reasoning

    ERIC Educational Resources Information Center

    Smith, Carmen Petrick; Kenlan, Kris

    2016-01-01

    Students' experiences with statistics and data analysis in middle school are often limited to little more than making and interpreting graphs. Although students may develop fluency in statistical procedures and vocabulary, they frequently lack the skills necessary to apply statistical reasoning in situations other than clear-cut textbook examples.…

  5. Explorations in Statistics: the Bootstrap

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fourth installment of Explorations in Statistics explores the bootstrap. The bootstrap gives us an empirical approach to estimate the theoretical variability among possible values of a sample statistic such as the…
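
    A minimal sketch of the bootstrap idea the column explores, using resampling with replacement to estimate the standard error of a sample mean (the data are synthetic):

```python
# Bootstrap estimate of the standard error of the mean from a single observed sample.
import numpy as np

rng = np.random.default_rng(4)
sample = rng.exponential(scale=2.0, size=50)  # one observed sample (synthetic)

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])
print(f"sample mean              = {sample.mean():.3f}")
print(f"bootstrap SE of the mean = {boot_means.std(ddof=1):.3f}")
print(f"plug-in s/sqrt(n)        = {sample.std(ddof=1) / np.sqrt(sample.size):.3f}")
```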

  6. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  7. Statistics Anxiety and Instructor Immediacy

    ERIC Educational Resources Information Center

    Williams, Amanda S.

    2010-01-01

    The purpose of this study was to investigate the relationship between instructor immediacy and statistics anxiety. It was predicted that students receiving immediacy would report lower levels of statistics anxiety. Using a pretest-posttest-control group design, immediacy was measured using the Instructor Immediacy scale. Statistics anxiety was…

  8. Statistics: It's in the Numbers!

    ERIC Educational Resources Information Center

    Deal, Mary M.; Deal, Walter F., III

    2007-01-01

    Mathematics and statistics play important roles in peoples' lives today. A day hardly passes that they are not bombarded with many different kinds of statistics. As consumers they see statistical information as they surf the web, watch television, listen to their satellite radios, or even read the nutrition facts panel on a cereal box in the…

  9. Statistics of indistinguishable particles.

    PubMed

    Wittig, Curt

    2009-07-01

    The wave function of a system containing identical particles takes into account the relationship between a particle's intrinsic spin and its statistical property. Specifically, the exchange of two identical particles having odd-half-integer spin results in the wave function changing sign, whereas the exchange of two identical particles having integer spin is accompanied by no such sign change. This is embodied in a term (-1)^(2s), which has the value +1 for integer s (bosons), and -1 for odd-half-integer s (fermions), where s is the particle spin. All of this is well-known. In the nonrelativistic limit, a detailed consideration of the exchange of two identical particles shows that exchange is accompanied by a 2π reorientation that yields the (-1)^(2s) term. The same bookkeeping is applicable to the relativistic case described by the proper orthochronous Lorentz group, because any proper orthochronous Lorentz transformation can be expressed as the product of spatial rotations and a boost along the direction of motion. PMID:19552474

  10. International petroleum statistics report

    SciTech Connect

    1996-05-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1984 through 1994.

  11. International petroleum statistics report

    SciTech Connect

    1995-11-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.

  12. International petroleum statistics report

    SciTech Connect

    1995-07-27

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.

  13. Topics in statistical mechanics

    SciTech Connect

    Elser, V.

    1984-05-01

    This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references.

  14. Statistical mechanics of nucleosomes

    NASA Astrophysics Data System (ADS)

    Chereji, Razvan V.

    Eukaryotic cells contain long DNA molecules (about two meters for a human cell) which are tightly packed inside the micrometric nuclei. Nucleosomes are the basic packaging unit of the DNA which allows this millionfold compactification. A longstanding puzzle is to understand the principles which allow cells to both organize their genomes into chromatin fibers in the crowded space of their nuclei, and also to keep the DNA accessible to many factors and enzymes. With the nucleosomes covering about three quarters of the DNA, their positions are essential because these influence which genes can be regulated by the transcription factors and which cannot. We study physical models which predict the genome-wide organization of the nucleosomes and also the relevant energies which dictate this organization. In the last five years, the study of chromatin has seen many important advances. In particular, in the field of nucleosome positioning, new techniques for identifying nucleosomes and the competing DNA-binding factors have appeared, such as chemical mapping with hydroxyl radicals and ChIP-exo; the resolution of nucleosome maps has increased with paired-end sequencing; and the price of sequencing an entire genome has decreased. We present a rigorous statistical mechanics model which is able to explain the recent experimental results by taking into account nucleosome unwrapping, competition between different DNA-binding proteins, and both the interaction between histones and DNA and the interaction between neighboring histones. We show a series of predictions of our new model, all in agreement with the experimental observations.

  15. International petroleum statistics report

    SciTech Connect

    1997-07-01

    The International Petroleum Statistics Report is a monthly publication that provides current international data. The report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent 12 months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1996; OECD stocks from 1973 through 1996; and OECD trade from 1986 through 1996.

  16. International petroleum statistics report

    SciTech Connect

    1996-10-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.

  17. A statistical mechanical problem?

    PubMed Central

    Costa, Tommaso; Ferraro, Mario

    2014-01-01

    The problem of deriving the processes of perception and cognition or the modes of behavior from states of the brain appears to be unsolvable in view of the huge numbers of elements involved. However, neural activities are not random, nor independent, but constrained to form spatio-temporal patterns, and thanks to these restrictions, which in turn are due to connections among neurons, the problem can at least be approached. The situation is similar to what happens in large physical ensembles, where global behaviors are derived from microscopic properties. Despite the obvious differences between neural and physical systems, a statistical mechanics approach is almost inescapable, since the dynamics of the brain as a whole are clearly determined by the outputs of single neurons. In this paper it will be shown how, starting from very simple systems, connectivity engenders levels of increasing complexity in the functions of the brain depending on specific constraints. Correspondingly, levels of explanation must take into account the fundamental role of constraints and assign at each level proper model structures and variables that, on the one hand, emerge from the outputs of lower levels and yet are specific, in that they ignore irrelevant details. PMID:25228891

  18. Statistical Mechanics of Zooplankton.

    PubMed

    Hinow, Peter; Nihongi, Ai; Strickler, J Rudi

    2015-01-01

    Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar "microscopic" quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the "ecological temperature" of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean's swimming behavior.
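
    The "ecological temperature" defined in this abstract is simply the population-average squared velocity, so it can be computed directly from tracking data. The sketch below assumes hypothetical 2D position tracks sampled at a fixed frame interval; the numbers are synthetic and are not the Daphnia data from the study.

```python
# Sketch of the "ecological temperature" defined in the abstract: the
# average squared swimming speed of a population, computed here from
# hypothetical 2D position tracks sampled at a fixed time interval.
import numpy as np

def ecological_temperature(tracks, dt):
    """tracks: array (n_animals, n_frames, 2) of positions; dt: seconds per frame."""
    velocities = np.diff(tracks, axis=1) / dt      # frame-to-frame velocity
    speed_sq = (velocities ** 2).sum(axis=-1)      # squared speed for each step
    return speed_sq.mean()

rng = np.random.default_rng(1)
# Synthetic random-walk tracks for 12 animals, 200 frames, positions in mm
tracks = np.cumsum(rng.normal(scale=0.5, size=(12, 200, 2)), axis=1)
print("ecological temperature (mm^2/s^2):", ecological_temperature(tracks, dt=0.1))
```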

  19. Statistical Mechanics of Zooplankton

    PubMed Central

    Hinow, Peter; Nihongi, Ai; Strickler, J. Rudi

    2015-01-01

    Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar “microscopic” quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the “ecological temperature” of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean’s swimming behavior. PMID:26270537

  20. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills.
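
    The one-parameter Rasch model mentioned here reduces to a simple logistic relation between person ability and item difficulty. The sketch below evaluates that relation for a few hypothetical students and items; the values are illustrative and unrelated to the SRBCI data.

```python
# Sketch of the one-parameter (Rasch) IRT model referenced in the abstract:
# the probability of a correct response depends only on the difference
# between person ability (theta) and item difficulty (b).
import numpy as np

def rasch_probability(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

abilities = np.array([-1.0, 0.0, 1.5])           # three hypothetical students
difficulties = np.array([-0.5, 0.3, 1.0, 2.0])   # four hypothetical items

# Probability matrix: rows = students, columns = items
probs = rasch_probability(abilities[:, None], difficulties[None, :])
print(np.round(probs, 2))
```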

  1. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  2. Gait patterns for crime fighting: statistical evaluation

    NASA Astrophysics Data System (ADS)

    Sulovská, Kateřina; Bělašková, Silvie; Adámek, Milan

    2013-10-01

    Criminality has been omnipresent throughout human history. Modern technology brings novel opportunities for identification of a perpetrator. One of these opportunities is the analysis of video recordings, which may be taken during the crime itself or before/after the crime. Video analysis can be classed as an identification analysis, i.e., identification of a person via external characteristics. Bipedal locomotion analysis focuses on human movement on the basis of anatomical-physiological features. Nowadays, human gait is tested by many laboratories to learn whether identification via bipedal locomotion is possible or not. The aim of our study is to use 2D components out of 3D data from the VICON Mocap system for deep statistical analyses. This paper introduces recent results of a fundamental study focused on various gait patterns under different conditions. The study contains data from 12 participants. Curves obtained from these measurements were sorted, averaged, and statistically tested to estimate the stability and distinctiveness of this biometric. Results show satisfactory distinctness of some chosen points, while others do not show significant differences. However, the results presented in this paper are from the initial phase of deeper and more exacting analyses of gait patterns under different conditions.

  3. EEO Implications of Job Analyses.

    ERIC Educational Resources Information Center

    Lacy, D. Patrick, Jr.

    1979-01-01

    Discusses job analyses as they relate to the requirements of Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, and the Rehabilitation Act of 1973. Argues that job analyses can establish the job-relatedness of entrance requirements and aid in defenses against charges of discrimination. Journal availability: see EA 511 615.

  4. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.
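
    The atlas derives balance winds from the geopotential fields; as a simplified, hedged stand-in, the sketch below computes geostrophic winds from the horizontal gradient of a synthetic geopotential height grid. The grid, wavelength, and latitude are invented for illustration, and the geostrophic relation is only an approximation to the balance winds actually used.

```python
# Hedged illustration: geostrophic winds u = -(1/f) dPhi/dy, v = (1/f) dPhi/dx
# computed from a synthetic gridded geopotential height field. This is a
# simplification of the balance winds derived in the atlas.
import numpy as np

g = 9.81
lat = np.deg2rad(45.0)
f = 2 * 7.292e-5 * np.sin(lat)               # Coriolis parameter at 45N

ny, nx, dx = 50, 60, 1.0e5                   # 100-km grid spacing (synthetic)
x = np.arange(nx) * dx
# Synthetic 500-hPa height field (m), one sinusoidal wave in x, uniform in y
z = np.tile(5500.0 + 100.0 * np.sin(2 * np.pi * x / (nx * dx)), (ny, 1))

phi = g * z                                  # geopotential
dphi_dy, dphi_dx = np.gradient(phi, dx, dx)  # gradients along y (axis 0), x (axis 1)
u_g = -dphi_dy / f
v_g = dphi_dx / f
print("max |v_g| (m/s):", np.abs(v_g).max())
```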

  5. Statistical model and error analysis of a proposed audio fingerprinting algorithm

    NASA Astrophysics Data System (ADS)

    McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

    2006-01-01

    In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (P_e) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate P_e following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to P_e for different parameters of the algorithm under varying degrees of desynchronisation.
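
    A rough feel for the bit-error behaviour analysed in the paper can be had from a toy Monte Carlo: build Haitsma-style hash bits as signs of band-energy differences across time and frequency, perturb the energies with Gaussian noise, and count flipped bits. The energy model, noise level, and dimensions below are illustrative assumptions, not the statistical model or parameters derived by the authors.

```python
# Toy Monte Carlo in the spirit of the binary (Haitsma-style) fingerprint the
# paper analyses: hash bits are signs of band-energy differences across time
# and frequency; the bit error rate is measured after perturbing the energies
# with Gaussian noise. All values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def hash_bits(energy):
    """energy: (frames, bands) array -> (frames-1, bands-1) binary fingerprint."""
    d = np.diff(energy, axis=1)          # difference across adjacent bands
    return np.diff(d, axis=0) > 0        # difference of that across frames

frames, bands = 256, 33
energy = rng.gamma(shape=2.0, scale=1.0, size=(frames, bands))
reference = hash_bits(energy)

bers = []
for _ in range(200):
    noisy = energy + rng.normal(scale=0.3, size=energy.shape)
    bers.append(np.mean(hash_bits(noisy) != reference))
print("mean bit error rate:", np.mean(bers))
```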

  6. Petroleum statistics in France

    SciTech Connect

    De Saint Germain, H.; Lamiraux, C.

    1995-08-01

    Thirty-three oil companies, including Elf, Exxon, Agip, and Conoco, as well as Coparex, Enron, Hadson, Midland, Hunt, Canyon, and Union Texas, are present in oil and gas exploration and production in France. The production of oil and gas in France amounts to some 60,000 bopd of oil and 350 MMcfpd of marketed natural gas each year, which still accounts for 3.5% and 10% of French domestic needs, respectively. To date, 166 fields have been discovered, representing a total reserve of 3 billion bbl of crude oil and 13 trillion cf of raw gas. These fields are concentrated in two major onshore sedimentary basins of Mesozoic age, the Aquitaine basin and the Paris basin. The Aquitaine basin can be subdivided into two distinct domains: the Parentis basin, where the largest field, Parentis, was discovered in 1954 and still produces about 3,700 bopd of oil, and where the Les Arbouslers field, discovered at the end of 1991, is currently producing about 10,000 bopd of oil; and the northern Pyrenees and their foreland, where the Lacq field, discovered in 1951, has produced about 7.7 tcf of gas since 1957 and is still producing 138 MMcfpd. In the Paris basin, the two large oil fields are Villeperclue, discovered in 1982 by Triton and Total, and Chaunoy, discovered in 1983 by Essorep, which are still producing about 10,000 and 15,000 bopd, respectively. The last significantly sized discovery occurred in 1990 with Itteville, by Elf Aquitaine, which is currently producing 4,200 bopd. The poster shows statistical data related to the past 20 years of oil and gas exploration and production in France.

  7. Ideal statistically quasi Cauchy sequences

    NASA Astrophysics Data System (ADS)

    Savas, Ekrem; Cakalli, Huseyin

    2016-08-01

    An ideal I is a family of subsets of N, the set of positive integers, which is closed under taking finite unions and subsets of its elements. A sequence (x_k) of real numbers is said to be S(I)-statistically convergent to a real number L if, for each ε > 0 and for each δ > 0, the set {n ∈ N : (1/n)|{k ≤ n : |x_k − L| ≥ ε}| ≥ δ} belongs to I. We introduce S(I)-statistically ward compactness of a subset of R, the set of real numbers, and S(I)-statistically ward continuity of a real function, in the senses that a subset E of R is S(I)-statistically ward compact if any sequence of points in E has an S(I)-statistically quasi-Cauchy subsequence, and a real function is S(I)-statistically ward continuous if it preserves S(I)-statistically quasi-Cauchy sequences, where a sequence (x_k) is said to be S(I)-statistically quasi-Cauchy when (Δx_k) is S(I)-statistically convergent to 0. We obtain results related to S(I)-statistically ward continuity, S(I)-statistically ward compactness, N_θ-ward continuity, and slowly oscillating continuity.
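
    For intuition, the special case where I is the ideal of density-zero sets reduces to ordinary statistical convergence, whose defining density condition can be checked numerically on a finite prefix. The sketch below does this for a sequence that deviates from its limit only on the perfect squares, a density-zero set; it illustrates the density condition, not the general S(I) machinery.

```python
# Finite-prefix illustration of the density condition behind statistical
# convergence: the fraction of indices k <= n with |x_k - L| >= eps should
# tend to 0. The example sequence deviates from L only at perfect squares.
import numpy as np

L, eps = 0.0, 0.5
n_max = 100_000
x = np.zeros(n_max)
squares = np.arange(1, int(n_max**0.5) + 1) ** 2
x[squares - 1] = 1.0                      # deviations on a density-zero set

for n in (10**3, 10**4, 10**5):
    bad = np.sum(np.abs(x[:n] - L) >= eps)
    print(f"n={n:>6}:  |{{k<=n : |x_k - L| >= eps}}| / n = {bad / n:.5f}")
```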

  8. Basic statistics in cell biology.

    PubMed

    Vaux, David L

    2014-01-01

    The physicist Ernest Rutherford said, "If your experiment needs statistics, you ought to have done a better experiment." Although this aphorism remains true for much of today's research in cell biology, a basic understanding of statistics can be useful to cell biologists to help in monitoring the conduct of their experiments, in interpreting the results, in presenting them in publications, and when critically evaluating research by others. However, training in statistics is often focused on the sophisticated needs of clinical researchers, psychologists, and epidemiologists, whose conclusions depend wholly on statistics, rather than the practical needs of cell biologists, whose experiments often provide evidence that is not statistical in nature. This review describes some of the basic statistical principles that may be of use to experimental biologists, but it does not cover the sophisticated statistics needed for papers that contain evidence of no other kind.

  9. Statistical genetics in traditionally cultivated crops.

    PubMed

    Artoisenet, Pierre; Minsart, Laure-Anne

    2014-11-01

    Traditional farming systems have attracted a lot of attention over the past decades as they have been recognized to supply an important component in the maintenance of the genetic diversity worldwide. A broad spectrum of traditionally managed crops has been studied to investigate how reproductive properties in combination with husbandry characteristics shape the genetic structure of the crops over time. However, traditional farms typically involve populations of small size whose genetic evolution is overwhelmed with statistical fluctuations inherent to the stochastic nature of the crossings. Hence there is generally no one-to-one mapping between crop properties and measured genotype data, and claims regarding crop properties on the basis of the observed genetic structure must be stated within a confidence level to be estimated by means of a dedicated statistical analysis. In this paper, we propose a comprehensive framework to carry out such statistical analyses. We illustrate the capabilities of our approach by applying it to crops of C. lanatus var. lanatus oleaginous type cultivated in Côte d'Ivoire. While some properties such as the effective field size considerably evade the constraints from experimental data, others such as the mating system turn out to be characterized with a higher statistical significance. We discuss the importance of our approach for studies on traditionally cultivated crops in general. PMID:24992232
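
    To see why small effective population sizes swamp genetic signals with chance, a standard Wright-Fisher drift simulation (not the authors' framework) is sketched below: allele frequencies in small populations wander far from their starting value after a few dozen generations, while large populations barely move. All parameters are illustrative.

```python
# Illustrative Wright-Fisher drift simulation (not the authors' framework)
# showing how strongly allele frequencies fluctuate by chance in small
# traditionally managed populations compared with large ones.
import numpy as np

rng = np.random.default_rng(7)

def drift(n_individuals, p0=0.5, generations=50, replicates=5):
    freqs = np.full(replicates, p0)
    history = [freqs.copy()]
    for _ in range(generations):
        counts = rng.binomial(2 * n_individuals, freqs)   # diploid sampling
        freqs = counts / (2 * n_individuals)
        history.append(freqs.copy())
    return np.array(history)

for n in (10, 1000):
    final = drift(n)[-1]
    print(f"N={n:>4}: final allele frequencies {np.round(final, 2)}")
```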

  10. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  11. Statistics of Statisticians: Critical Mass of Statistics and Operational Research Groups

    NASA Astrophysics Data System (ADS)

    Kenna, Ralph; Berche, Bertrand

    Using a recently developed model, inspired by mean field theory in statistical physics, and data from the UK's Research Assessment Exercise, we analyse the relationship between the qualities of statistics and operational research groups and the quantities of researchers in them. Similar to other academic disciplines, we provide evidence for a linear dependency of quality on quantity up to an upper critical mass, which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group. The model also predicts a lower critical mass, which research groups should strive to achieve to avoid extinction. For statistics and operational research, the lower critical mass is estimated to be 9 ± 3. The upper critical mass, beyond which research quality does not significantly depend on group size, is 17 ± 6.
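
    The quality/quantity relation described here can be caricatured as a piecewise-linear function that rises with group size up to the upper critical mass and is flat beyond it. The sketch below encodes that caricature with invented numbers; it is a reading aid, not the authors' fitted model.

```python
# Caricature of the piecewise-linear quality/quantity relation in the abstract:
# quality rises linearly with group size up to an upper critical mass and then
# plateaus. Slope and values are synthetic.
import numpy as np

def quality(n_staff, slope=2.0, upper_critical_mass=17):
    n = np.asarray(n_staff, dtype=float)
    return slope * np.minimum(n, upper_critical_mass)

sizes = np.array([3, 9, 17, 30])
print(dict(zip(sizes.tolist(), quality(sizes).tolist())))
```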

  12. Feed analyses and their interpretation.

    PubMed

    Hall, Mary Beth

    2014-11-01

    Compositional analysis is central to determining the nutritional value of feedstuffs for use in ration formulation. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance and analytical variability of the assays, and whether an analysis is suitable to be applied to a particular feedstuff. Commercial analyses presently available for carbohydrates, protein, and fats have improved nutritionally pertinent description of feed fractions. Factors affecting interpretation of feed analyses and the nutritional relevance and application of currently available analyses are discussed.

  13. Enhancement of ethylenethiourea recoveries in food analyses by addition of cysteine hydrochloride.

    PubMed

    Sack, C A

    1995-01-01

    The effectiveness of cysteine hydrochloride (Cys-HCl) as a preservative of ethylenethiourea (ETU) in product matrixes and during analysis was studied. ETU recoveries were adversely affected by certain product matrixes when fortified directly into the product. Recoveries in 8 selected food items were 0-92% when analyzed 30 min after fortification and 0-51% when analyzed after 24 h. When Cys-HCl was added to product prior to fortification, recoveries increased to 71-95% even after frozen storage for 2-4 weeks. Cys-HCl was added during analysis of 53 untreated items. Recoveries improved an average of 15% with Cys-HCl. Without Cys-HCl, recoveries were erratic (20-98%), but with Cys-HCl, recoveries were 68-113%. Other antioxidants (sodium sulfite, butylated hydroxyanisole, butylated hydroxytoluene, and vitamins A and C) also were evaluated as ETU preservatives. When lettuce was treated first with sodium sulfite and then fortified with ETU, recoveries averaged 86%; without sodium sulfite, they averaged 1%. The other antioxidants were less effective for preserving ETU in lettuce, giving only 8-46% recoveries. The effect of oxidizers (potassium bromate, sodium hypochlorite, and hydrogen peroxide) on ETU recovery was also determined. Recovery of ETU from a baby food product (pears and pineapple) was 82%; with oxidizers, recoveries were 0-8%.

  14. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  15. Statistical Performances of Resistive Active Power Splitter

    NASA Astrophysics Data System (ADS)

    Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul

    2016-03-01

    In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) are proposed. It is based on an active cell composed of a Field Effect Transistor in cascade with shunted resistors at the input and the output (resistive amplifier topology). The PWS uncertainty versus resistance tolerances is assessed by using a stochastic method. Furthermore, with the proposed topology, the device gain can be controlled easily by varying a resistance. This provides a useful tool to analyse the statistical sensitivity of the system in an uncertain environment.
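
    The sensitivity analysis described here can be mimicked with a simple Monte Carlo over resistor tolerances. The sketch below propagates 5% tolerances through an assumed small-signal gain expression for a resistively loaded stage; the gain formula and component values are illustrative stand-ins for the actual FET-based cell, not the authors' circuit.

```python
# Hedged Monte Carlo sketch of a tolerance sensitivity analysis: propagate
# resistor manufacturing tolerances through an assumed gain expression and
# inspect the spread of the result. The two-resistor gain formula below is an
# illustrative stand-in for the actual FET-based active cell.
import numpy as np

rng = np.random.default_rng(3)

def gain(r_in, r_out, gm=0.05):
    """Assumed small-signal gain of a resistively loaded stage (illustrative)."""
    return gm * (r_in * r_out) / (r_in + r_out)

n_trials, tol = 100_000, 0.05                 # 5% resistor tolerance
r1 = 220.0 * (1 + tol * rng.uniform(-1, 1, n_trials))
r2 = 470.0 * (1 + tol * rng.uniform(-1, 1, n_trials))
g = gain(r1, r2)
print(f"gain mean = {g.mean():.4f}, std = {g.std():.4f}")
```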

  16. Analyses of Transistor Punchthrough Failures

    NASA Technical Reports Server (NTRS)

    Nicolas, David P.

    1999-01-01

    The failure of two transistors in the Altitude Switch Assembly for the Solid Rocket Booster, followed by two additional failures a year later, presented a challenge to failure analysts. These devices had successfully worked for many years on numerous missions, and there was no history of failures with this type of device. Extensive checks of the test procedures gave no indication of the cause. The devices were manufactured more than twenty years ago, and failure information on this lot date code was not readily available. External visual examination, radiography, PEID, and leak testing were performed with nominal results. Electrical testing indicated nearly identical base-emitter and base-collector characteristics (both forward and reverse) with a low-resistance short from emitter to collector. These characteristics are indicative of a classic failure mechanism called punchthrough. In failure analysis, punchthrough refers to a condition where a relatively low-voltage pulse causes the device to conduct very hard, producing localized areas of thermal runaway or "hot spots". At one or more of these hot spots, the excessive currents melt the silicon. Heavily doped emitter material diffuses through the base region to the collector, forming a diffusion pipe shorting the emitter to base to collector. Upon cooling, an alloy junction forms between the pipe and the base region. Generally, the hot spot (punchthrough site) is under the bond and no surface artifact is visible. The devices were delidded and the internal structures were examined microscopically. The gold emitter lead was melted on one device, but others had anomalies in the metallization around the intact emitter bonds. The SEM examination confirmed some anomalies to be cosmetic defects, while other anomalies were artifacts of the punchthrough site. Subsequent to these analyses, the contractor determined that some irregular testing procedures, heretofore unreported, occurred at the time of the failures. These testing

  17. Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis

    NASA Technical Reports Server (NTRS)

    Konrad, T. G.; Kropfli, R. A.

    1975-01-01

    Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.

  18. Statistics without Tears: Complex Statistics with Simple Arithmetic

    ERIC Educational Resources Information Center

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…
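
    In the spirit of "complex statistics with simple arithmetic", the sketch below performs an additive time series decomposition using only a centred moving average for the trend and per-month averaging of the detrended values for the seasonal component. The monthly series is synthetic, and edge effects are ignored for brevity.

```python
# Simple-arithmetic additive decomposition of a monthly time series:
# a centred 2x12 moving average estimates the trend; averaging the detrended
# values by calendar month estimates the seasonal component.
import numpy as np

rng = np.random.default_rng(5)
months = np.arange(120)
series = 0.5 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 120)

def centred_moving_average(x, window=12):
    # 2x12 moving average so the window is centred on each month
    kernel = np.ones(window + 1)
    kernel[0] = kernel[-1] = 0.5
    kernel /= window
    return np.convolve(x, kernel, mode="same")

trend = centred_moving_average(series)
detrended = series - trend
# Average by calendar month (edge effects ignored for brevity)
seasonal = np.array([detrended[m::12].mean() for m in range(12)])
print("estimated seasonal pattern:", np.round(seasonal, 1))
```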

  19. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Redner, R.; Decell, H. P., Jr.

    1976-01-01

    A necessary and sufficient condition is developed such that there exists a continuous linear sufficient statistic T for a dominated collection of totally finite measures defined on the Borel field generated by the open sets of a Banach space X. In particular, corollary necessary and sufficient conditions are given so that there exists a rank K linear sufficient statistic T for any finite collection of probability measures having n-variate normal densities. In this case a simple calculation, involving only the population means and covariances, determines the smallest integer K for which there exists a rank K linear sufficient statistic T (as well as an associated statistic T itself).

  20. Anaerobic sludge digestion with a biocatalytic additive

    SciTech Connect

    Ghosh, S.; Henry, M.P.; Fedde, P.A.

    1982-01-01

    The objective of this research was to evaluate the effects of a lactobacillus additive on anaerobic sludge digestion under normal, variable, and overload operating conditions. The additive was a whey fermentation product of an acid-tolerant strain of Lactobacillus acidophilus fortified with CaCO3, (NH4)2HPO4, ferrous lactate, and lactic acid. The lactobacillus additive is multifunctional in nature and provides growth factors, metabolic intermediates, and enzymes needed for substrate degradation and cellular synthesis. The experimental work consisted of several pairs of parallel mesophilic (35°C) digestion runs (control and test) conducted in five experimental phases. Baseline runs without the additive showed that the two experimental digesters had the same methane content, gas production rate (GPR), and methane yield. The effect of the additive was to increase methane yield and GPR by about 5% (which was statistically significant) during digester operation at a loading rate (LR) of 3.2 kg VS/m³-day and a hydraulic retention time (HRT) of 14 days. Data collected from the various experimental phases showed that the biochemical additive increased methane yield, gas production rate, and VS reduction, and decreased volatile acids accumulation. In addition, it enhanced digester buffer capacity and improved the fertilizer value and dewatering characteristics of the digested residue.