The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed, including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
ERIC Educational Resources Information Center
Takaria, J.; Rumahlatu, D.
2016-01-01
The focus of this study is to examine comprehensively the enhancement of statistical literacy and self-concept among elementary school student teachers through the CPS-BML model, in which this enhancement is measured through N-gain. The results of the study indicate that the use of the Collaborative Problem Solving Model assisted by literacy media (CPS-ALM) model…
Animal movement: Statistical models for telemetry data
Hooten, Mevin B.; Johnson, Devin S.; McClintock, Brett T.; Morales, Juan M.
2017-01-01
The study of animal movement has always been a key element in ecological science, because it is inherently linked to critical processes that scale from individuals to populations and communities to ecosystems. Rapid improvements in biotelemetry data collection and processing technology have given rise to a variety of statistical methods for characterizing animal movement. The book serves as a comprehensive reference for the types of statistical models used to study individual-based animal movement.
Statistical analysis and model validation of automobile emissions
DOT National Transportation Integrated Search
2000-09-01
The article discusses the development of a comprehensive modal emissions model that is currently being integrated with a variety of transportation models as part of National Cooperative Highway Research Program project 25-11. Described is the second-...
Functional annotation of regulatory pathways.
Pandey, Jayesh; Koyutürk, Mehmet; Kim, Yohan; Szpankowski, Wojciech; Subramaniam, Shankar; Grama, Ananth
2007-07-01
Standardized annotations of biomolecules in interaction networks (e.g. Gene Ontology) provide comprehensive understanding of the function of individual molecules. Extending such annotations to pathways is a critical component of functional characterization of cellular signaling at the systems level. We propose a framework for projecting gene regulatory networks onto the space of functional attributes using multigraph models, with the objective of deriving statistically significant pathway annotations. We first demonstrate that annotations of pairwise interactions do not generalize to indirect relationships between processes. Motivated by this result, we formalize the problem of identifying statistically overrepresented pathways of functional attributes. We establish the hardness of this problem by demonstrating the non-monotonicity of common statistical significance measures. We propose a statistical model that emphasizes the modularity of a pathway, evaluating its significance based on the coupling of its building blocks. We complement the statistical model by an efficient algorithm and software, Narada, for computing significant pathways in large regulatory networks. Comprehensive results from our methods applied to the Escherichia coli transcription network demonstrate that our approach is effective in identifying known, as well as novel biological pathway annotations. Narada is implemented in Java and is available at http://www.cs.purdue.edu/homes/jpandey/narada/.
Ford, M E; Kallen, M; Richardson, P; Matthiesen, E; Cox, V; Teng, E J; Cook, K F; Petersen, N J
2008-01-01
To evaluate the effects of social support on comprehension and recall of consent form information in a study of Parkinson disease patients and their caregivers. Comparison of comprehension and recall outcomes among participants who read and signed the consent form accompanied by a family member/friend versus those of participants who read and signed the consent form unaccompanied. Comprehension and recall of consent form information were measured at one week and one month respectively, using Part A of the Quality of Informed Consent Questionnaire (QuIC). The mean age of the sample of 143 participants was 71 years (SD = 8.6 years). Analysis of covariance was used to compare QuIC scores between the intervention group (n = 70) and control group (n = 73). In the 1-week model, no statistically significant intervention effect was found (p = 0.860). However, the intervention status by patient status interaction was statistically significant (p = 0.012). In the 1-month model, no statistically significant intervention effect was found (p = 0.480). Again, however, the intervention status by patient status interaction was statistically significant (p = 0.040). At both time periods, intervention group patients scored higher (better) on the QuIC than did intervention group caregivers, and control group patients scored lower (worse) on the QuIC than did control group caregivers. Social support played a significant role in enhancing comprehension and recall of consent form information among patients.
Statistics and classification of the microwave zebra patterns associated with solar flares
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Baolin; Tan, Chengming; Zhang, Yin
2014-01-10
The microwave zebra pattern (ZP) is the most interesting, intriguing, and complex spectral structure frequently observed in solar flares. A comprehensive statistical study will certainly help us to understand its formation mechanism, which is not yet exactly clear. This work presents a comprehensive statistical analysis of a large sample of 202 ZP events collected from observations at the Chinese Solar Broadband Radio Spectrometer at Huairou and the Ondřejov Radiospectrograph in the Czech Republic at frequencies of 1.00-7.60 GHz from 2000 to 2013. After investigating the parameter properties of ZPs, such as the occurrence in flare phase, frequency range, polarization degree, duration, etc., we find that the variation of zebra stripe frequency separation with respect to frequency is the best indicator for a physical classification of ZPs. Microwave ZPs can be classified into three types: equidistant ZPs, variable-distant ZPs, and growing-distant ZPs, possibly corresponding to mechanisms of the Bernstein wave model, whistler wave model, and double plasma resonance model, respectively. This statistical classification may help us to clarify the controversies between the existing various theoretical models and understand the physical processes in the source regions.
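The classification turns on how stripe frequency separation trends with frequency. A rough sketch of such a decision rule follows (this is not the authors' actual procedure; the thresholds and the example measurements are hypothetical):

import numpy as np

def classify_zp(freqs_ghz, seps_mhz, slope_tol=0.05, scatter_tol=0.2):
    # Classify a zebra pattern by the trend of stripe frequency separation
    # with frequency: irregular separation -> variable-distant; roughly
    # constant -> equidistant; systematically increasing -> growing-distant.
    # Thresholds are illustrative only.
    f, d = np.asarray(freqs_ghz, float), np.asarray(seps_mhz, float)
    slope, intercept = np.polyfit(f, d, 1)
    detrended = d - (slope * f + intercept)
    if np.std(detrended) / np.mean(d) > scatter_tol:
        return "variable-distant"
    rel_slope = slope * (f.max() - f.min()) / np.mean(d)
    if abs(rel_slope) < slope_tol:
        return "equidistant"
    return "growing-distant" if rel_slope > 0 else "variable-distant"

# Hypothetical event: separation grows from 60 to 86 MHz across 2.6-3.4 GHz
print(classify_zp([2.6, 2.8, 3.0, 3.2, 3.4], [60, 66, 73, 79, 86]))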
Multiple commodities in statistical microeconomics: Model and market
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Yu, Miao; Du, Xin
2016-11-01
A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.
Krieger, Janice L; Neil, Jordan M; Strekalova, Yulia A; Sarge, Melanie A
2017-03-01
Improving informed consent to participate in randomized clinical trials (RCTs) is a key challenge in cancer communication. The current study examines strategies for enhancing randomization comprehension among patients with diverse levels of health literacy and identifies cognitive and affective predictors of intentions to participate in cancer RCTs. Using a post-test-only experimental design, cancer patients (n = 500) were randomly assigned to receive one of three message conditions for explaining randomization (ie, plain language condition, gambling metaphor, benign metaphor) or a control message. All statistical tests were two-sided. Health literacy was a statistically significant moderator of randomization comprehension (P = .03). Among participants with the lowest levels of health literacy, the benign metaphor resulted in greater comprehension of randomization as compared with plain language (P = .04) and control (P = .004) messages. Among participants with the highest levels of health literacy, the gambling metaphor resulted in greater randomization comprehension as compared with the benign metaphor (P = .04). A serial mediation model showed a statistically significant negative indirect effect of comprehension on behavioral intention through personal relevance of RCTs and anxiety associated with participation in RCTs (P < .001). The effectiveness of metaphors for explaining randomization depends on health literacy, with a benign metaphor being particularly effective for patients at the lower end of the health literacy spectrum. The theoretical model demonstrates the cognitive and affective predictors of behavioral intention to participate in cancer RCTs and offers guidance on how future research should employ communication strategies to improve the informed consent processes. © The Author 2016. Published by Oxford University Press.
AGARD Bulletin. Technical Programme, 1981.
1980-08-01
ionospheric effect models and their associated codes. Physical, statistical, and hybrid models will be described in a comprehensive manner. Descriptions... will be to review: the various conventional modes of optical correction required either by ametropias or by normal or pathological drops in visual
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1993-01-01
Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph, and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use, and what specific processing skills are required?
ERIC Educational Resources Information Center
Oslund, Eric L.; Clemens, Nathan H.; Simmons, Deborah C.; Simmons, Leslie E.
2018-01-01
The current study examined statistically significant differences between struggling and adequate readers using a multicomponent model of reading comprehension in 796 sixth through eighth graders, with a primary focus on word reading and vocabulary. Path analyses and Wald tests were used to investigate the direct and indirect relations of word…
Schenker, Victoria J.; Petrill, Stephen A.
2015-01-01
This study investigated the genetic and environmental influences on observed associations between listening comprehension, reading motivation, and reading comprehension. Univariate and multivariate quantitative genetic models were conducted in a sample of 284 pairs of twins at a mean age of 9.81 years. Genetic and nonshared environmental factors accounted for statistically significant variance in listening and reading comprehension, and nonshared environmental factors accounted for variance in reading motivation. Furthermore, listening comprehension demonstrated unique genetic and nonshared environmental influences but also had overlapping genetic influences with reading comprehension. Reading motivation and reading comprehension each had unique and overlapping nonshared environmental contributions. Therefore, listening comprehension appears to be related to reading primarily due to genetic factors whereas motivation appears to affect reading via child-specific, nonshared environmental effects. PMID:26321677
A new computer code for discrete fracture network modelling
NASA Astrophysics Data System (ADS)
Xu, Chaoshui; Dowd, Peter
2010-03-01
The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
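The core simulation step described here, fracture locations from a point process with geometric marks, is easy to sketch. Below is a minimal two-dimensional version for the homogeneous Poisson case only (the package also supports non-homogeneous, cluster, and Cox processes); the mark distributions and parameter values are illustrative assumptions, not those of the software:

import numpy as np
rng = np.random.default_rng(1)

def simulate_fractures(intensity, region=(0.0, 100.0, 0.0, 100.0),
                       mean_length=8.0, sigma_deg=20.0):
    # Homogeneous Poisson marked point process in 2-D: fracture centres
    # follow a Poisson process; lengths are exponential and orientations
    # normal about a mean trend (illustrative choices of marks).
    x0, x1, y0, y1 = region
    n = rng.poisson(intensity * (x1 - x0) * (y1 - y0))
    cx, cy = rng.uniform(x0, x1, n), rng.uniform(y0, y1, n)
    lengths = rng.exponential(mean_length, n)
    angles = np.deg2rad(rng.normal(45.0, sigma_deg, n))
    dx, dy = 0.5 * lengths * np.cos(angles), 0.5 * lengths * np.sin(angles)
    return np.column_stack([cx - dx, cy - dy, cx + dx, cy + dy])

segments = simulate_fractures(intensity=0.01)
print(segments.shape)  # (n_fractures, 4): endpoint coordinates of each trace

Virtual sampling tools such as scanline sampling then amount to intersecting a probe line with these segments.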
Statistical models of lunar rocks and regolith
NASA Technical Reports Server (NTRS)
Marcus, A. H.
1973-01-01
The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.
Fuchs, Douglas; Compton, Donald L.; Fuchs, Lynn S.; Bryant, V. Joan; Hamlett, Carol L.; Lambert, Warren
2012-01-01
In a sample of 195 first graders selected for poor reading performance, the authors explored four cognitive predictors of later reading comprehension and reading disability (RD) status. In fall of first grade, the authors measured the children’s phonological processing, rapid automatized naming (RAN), oral language comprehension, and nonverbal reasoning. Throughout first grade, they also modeled the students’ reading progress by means of weekly Word Identification Fluency (WIF) tests to derive December and May intercepts. The authors assessed their reading comprehension in the spring of Grades 1–5. With the four cognitive variables and the WIF December intercept as predictors, 50.3% of the variance in fifth-grade reading comprehension was explained: 52.1% of this 50.3% was unique to the cognitive variables, 13.1% to the WIF December intercept, and 34.8% was shared. All five predictors were statistically significant. The same four cognitive variables with the May (rather than December) WIF intercept produced a model that explained 62.1% of the variance. Of this amount, the cognitive variables and May WIF intercept accounted for 34.5% and 27.7%, respectively; they shared 37.8%. All predictors in this model were statistically significant except RAN. Logistic regression analyses indicated that the accuracy with which the cognitive variables predicted end-of-fifth-grade RD status was 73.9%. The May WIF intercept contributed reliably to this prediction; the December WIF intercept did not. Results are discussed in terms of a role for cognitive abilities in identifying, classifying, and instructing students with severe reading problems. PMID:22539057
Kowall, Bernd; Rathmann, Wolfgang; Giani, Guido; Schipf, Sabine; Baumeister, Sebastian; Wallaschofski, Henri; Nauck, Matthias; Völzke, Henry
2013-04-01
Random glucose is widely used in routine clinical practice. We investigated whether this non-standardized glycemic measure is useful for individual diabetes prediction. The Study of Health in Pomerania (SHIP), a population-based cohort study in north-east Germany, included 3107 diabetes-free persons aged 31-81 years at baseline in 1997-2001. 2475 persons participated at 5-year follow-up and gave self-reports of incident diabetes. For the total sample and for subjects aged ≥50 years, statistical properties of prediction models with and without random glucose were compared. A basic model (including age, sex, diabetes of parents, hypertension and waist circumference) and a comprehensive model (additionally including various lifestyle variables and blood parameters, but not HbA1c) performed statistically significantly better after adding random glucose (e.g., the area under the receiver-operating curve (AROC) increased from 0.824 to 0.856 after adding random glucose to the comprehensive model in the total sample). Likewise, adding random glucose to prediction models which included HbA1c led to significant improvements of predictive ability (e.g., for subjects ≥50 years, AROC increased from 0.824 to 0.849 after adding random glucose to the comprehensive model + HbA1c). Random glucose is useful for individual diabetes prediction, and improves prediction models including HbA1c. Copyright © 2012 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
Verbal Neuropsychological Functions in Aphasia: An Integrative Model
ERIC Educational Resources Information Center
Vigliecca, Nora Silvana; Báez, Sandra
2015-01-01
A theoretical framework which considers the verbal functions of the brain under a multivariate and comprehensive cognitive model was statistically analyzed. A confirmatory factor analysis was performed to verify whether some recognized aphasia constructs can be hierarchically integrated as latent factors from a homogenously verbal test. The Brief…
Improving DHH students' grammar through an individualized software program.
Cannon, Joanna E; Easterbrooks, Susan R; Gagné, Phill; Beal-Alvarez, Jennifer
2011-01-01
The purpose of this study was to determine if the frequent use of a targeted, computer software grammar instruction program, used as an individualized classroom activity, would influence the comprehension of morphosyntax structures (determiners, tense, and complementizers) in deaf/hard-of-hearing (DHH) participants who use American Sign Language (ASL). Twenty-six students from an urban day school for the deaf participated in this study. Two hierarchical linear modeling growth curve analyses showed that the influence of LanguageLinks: Syntax Assessment and Intervention (LL) resulted in statistically significant gains in participants' comprehension of morphosyntax structures. Two dependent t tests revealed statistically significant results between the pre- and postintervention assessments on the Diagnostic Evaluation of Language Variation-Norm Referenced. The daily use of LL increased the morphosyntax comprehension of the participants in this study and may be a promising practice for DHH students who use ASL.
Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.
Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W
2018-05-18
Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.
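The generic logic of test-statistic-based adequacy assessment is worth sketching: simulate the statistic's null distribution under the fitted model and ask where the observed value falls. A toy version follows (the counts are hypothetical, and a simple goodness-of-fit chi-square stands in for the phylogenomic test statistics the paper evaluates):

import numpy as np
rng = np.random.default_rng(0)

def adequacy_p_value(observed_stat, simulate_stat, n_sims=1000):
    # Parametric-bootstrap adequacy check: locate the observed statistic
    # in the distribution simulated under the fitted model. Small p-values
    # flag model inadequacy (generic sketch, not the authors' pipeline).
    null = np.array([simulate_stat() for _ in range(n_sims)])
    return float(np.mean(null >= observed_stat))

counts = np.array([300, 260, 220, 220])   # hypothetical A, C, G, T counts
n = counts.sum()
expected = np.full(4, n / 4)               # fitted model: equal frequencies
obs_chi2 = np.sum((counts - expected) ** 2 / expected)

def sim_stat():
    c = rng.multinomial(n, [0.25] * 4)     # data simulated under the model
    return np.sum((c - expected) ** 2 / expected)

print(adequacy_p_value(obs_chi2, sim_stat))

The paper's contribution is essentially about where to place the rejection threshold on such p-values so that rejection tracks unreliable phylogenetic estimates.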
ERIC Educational Resources Information Center
1971
Computers have effected a comprehensive transformation of chemistry. Computers have greatly enhanced the chemist's ability to do model building, simulations, data refinement and reduction, analysis of data in terms of models, on-line data logging, automated control of experiments, quantum chemistry and statistical and mechanical calculations, and…
A virtual climate library of surface temperature over North America for 1979-2015
NASA Astrophysics Data System (ADS)
Kravtsov, Sergey; Roebber, Paul; Brazauskas, Vytaras
2017-10-01
The most comprehensive continuous-coverage modern climatic data sets, known as reanalyses, come from combining state-of-the-art numerical weather prediction (NWP) models with diverse available observations. These reanalysis products estimate the path of climate evolution that actually happened, and their use in a probabilistic context—for example, to document trends in extreme events in response to climate change—is, therefore, limited. Free runs of NWP models without data assimilation can in principle be used for the latter purpose, but such simulations are computationally expensive and are prone to systematic biases. Here we produce a high-resolution, 100-member ensemble simulation of surface atmospheric temperature over North America for the 1979-2015 period using a comprehensive spatially extended non-stationary statistical model derived from the data based on the North American Regional Reanalysis. The surrogate climate realizations generated by this model are independent from, yet nearly statistically congruent with reality. This data set provides unique opportunities for the analysis of weather-related risk, with applications in agriculture, energy development, and protection of human life.
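A single-gridpoint caricature of such a surrogate generator, an AR(1) anomaly model around a seasonal mean, conveys the idea; the paper's actual model is spatially extended and non-stationary, and all numbers below are invented:

import numpy as np
rng = np.random.default_rng(1979)

t = np.arange(365 * 10)
clim = 10 + 12 * np.sin(2 * np.pi * t / 365.25)        # seasonal mean cycle
obs = clim + rng.normal(0, 2, t.size)                  # stand-in "reanalysis"

anom = obs - clim
phi = np.corrcoef(anom[:-1], anom[1:])[0, 1]           # AR(1) coefficient
sigma = np.std(anom) * np.sqrt(1 - phi ** 2)           # innovation std dev

def surrogate():
    # Emit one statistically congruent realization: AR(1) anomalies
    # superposed on the climatological cycle.
    x = np.zeros(t.size)
    for i in range(1, t.size):
        x[i] = phi * x[i - 1] + rng.normal(0, sigma)
    return clim + x

ensemble = np.stack([surrogate() for _ in range(100)]) # 100-member ensemble
print(ensemble.shape)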
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.
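The "principles behind the computations" that the primer emphasizes can be illustrated with the simplest possible Markov chain Monte Carlo example, a random-walk Metropolis sampler for a normal mean (a generic sketch with invented data, not code from the book):

import numpy as np
rng = np.random.default_rng(42)

y = rng.normal(3.0, 1.0, 25)   # simulated observations (assumed example)

def log_post(mu):
    # Normal likelihood (sigma known = 1) with a vague Normal(0, 10) prior
    return -0.5 * np.sum((y - mu) ** 2) - 0.5 * (mu / 10.0) ** 2

samples, mu = [], 0.0
for _ in range(5000):
    prop = mu + rng.normal(0, 0.5)                    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                     # accept the move
    samples.append(mu)

print(np.mean(samples[1000:]))   # posterior mean after discarding burn-in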
NASA Technical Reports Server (NTRS)
Shumka, A.; Sollock, S. G.
1981-01-01
This paper represents the first comprehensive survey of the Mount Laguna Photovoltaic Installation. The novel techniques used for performing the field tests have been effective in locating and characterizing defective modules. A comparative analysis on the two types of modules used in the array indicates that they have significantly different failure rates, different distributions in degradational space and very different failure modes. A life cycle model is presented to explain a multimodal distribution observed for one module type. A statistical model is constructed and it is shown to be in good agreement with the field data.
Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model
Hopkins, John B.; Ferguson, Jake M.
2012-01-01
Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals—each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs. PMID:22235246
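At its core, a SIMM treats consumer isotope values as a convex combination of source signatures. A bare-bones point-estimate version of that mixing calculation is sketched below (the signatures are hypothetical; IsotopeR's Bayesian machinery additionally propagates measurement and discrimination uncertainty and enforces the proportion constraints properly):

import numpy as np

# Hypothetical mean signatures (d13C, d15N) for three food sources
sources = np.array([[-26.0,  4.0],    # plants
                    [-19.0,  9.0],    # small mammals
                    [-14.0, 16.0]])   # fish
mixture = np.array([-20.1, 8.9])      # consumer tissue values (hypothetical)

# Solve mixture = p @ sources subject to sum(p) = 1 by augmenting the
# linear system with the unit-sum constraint (non-negativity not enforced).
A = np.vstack([sources.T, np.ones(3)])
b = np.append(mixture, 1.0)
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p.round(3))   # ~ [0.3, 0.5, 0.2]: estimated diet proportions

With two isotopes and three sources the system is exactly determined; with more sources than isotopes-plus-one, the problem is underdetermined, which is one reason fully Bayesian SIMMs report proportion distributions rather than point estimates.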
NASA Astrophysics Data System (ADS)
Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.
2015-02-01
Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.
Li, Lin; Dai, Jia-Xi; Xu, Le; Huang, Zhen-Xia; Pan, Qiong; Zhang, Xi; Jiang, Mei-Yun; Chen, Zhao-Hong
2017-06-01
To observe the effect of a rehabilitation intervention on the comprehensive health status of patients with hand burns. Most studies of hand-burn patients have focused on functional recovery; there have been no studies involving a biological-psychological-social rehabilitation model of hand-burn patients. A randomized controlled design was used. Patients with hand burns were recruited to the study, and sixty patients participated. Participants were separated into two groups: (1) the rehabilitation intervention model group (n=30) completed the rehabilitation intervention model, which included the following measures: enhanced social support, intensive health education, comprehensive psychological intervention, and graded exercise; (2) the control group (n=30) completed routine treatment. The intervention lasted 5 weeks. Analysis of variance (ANOVA) and Student's t tests were conducted. The rehabilitation intervention group had significantly better scores than the control group for comprehensive health, physical function, psychological function, social function, and general health. The differences between the index scores of the two groups were statistically significant. The rehabilitation intervention improved the comprehensive health status of patients with hand burns and has favorable clinical application. The comprehensive rehabilitation intervention model used here provides scientific guidance for medical staff aiming to improve the integrated health status of hand-burn patients and accelerate their recovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Jacob LaFontaine; Lauren Hay; Stacey Archfield; William Farmer; Julie Kiang
2016-01-01
The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is...
Orientation Examples Showing Application of the C.A.M.P.U.S. Simulation Model.
ERIC Educational Resources Information Center
Hansen, B. L.; Barron, J. G.
This pamphlet contains information and examples intended to show how the University of Toronto C.A.M.P.U.S. model operates. C.A.M.P.U.S. (Comprehensive Analytical Method for Planning in the University Sphere) is a computer model which processes projected enrollment statistics and other necessary information in such a way as to yield time-based…
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
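The flavor of a mean-scaling ("Satorra-Bentler-style") adjustment can be shown in a few lines: when the fit statistic's null distribution is inflated relative to χ2, dividing by an estimated scaling factor restores the nominal type I error rate. In practice the factor is estimated from fourth-order moments of the data; the simulation below simply invents an inflated null as a stand-in:

import numpy as np
rng = np.random.default_rng(7)

df = 5
crit = np.quantile(rng.chisquare(df, 200000), 0.95)  # ~ chi2 critical value

# Stand-in for a SEM fit statistic whose null distribution is inflated
# under non-normal data (purely illustrative: a scaled chi-square).
T_null = 1.4 * rng.chisquare(df, 20000)

c = np.mean(T_null) / df    # mean-scaling factor, Satorra-Bentler style
for name, T in [("naive", T_null), ("mean-scaled", T_null / c)]:
    print(name, "type I error at nominal .05:", np.mean(T > crit).round(3))

The naive statistic rejects far too often (~0.16 here), while the mean-scaled version recovers the nominal 0.05, which is the behavior the robust adjustments in this literature aim for.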
Emotion comprehension: the impact of nonverbal intelligence.
Albanese, Ottavia; De Stasio, Simona; Di Chiacchio, Carlo; Fiorilli, Caterina; Pons, Francisco
2010-01-01
A substantial body of research has established that emotion understanding develops throughout early childhood and has identified three hierarchical developmental phases: external, mental, and reflexive. The authors analyzed nonverbal intelligence and its effect on children's improvement of emotion understanding and hypothesized that cognitive level is a consistent predictor of emotion comprehension. In all, 366 children (182 girls, 184 boys) between the ages of 3 and 10 years were tested using the Test of Emotion Comprehension and the Coloured Progressive Matrices. The data obtained by using the path analysis model revealed that nonverbal intelligence was statistically associated with the ability to recognize emotions in the 3 developmental phases. The use of this model showed the significant effect that cognitive aspect plays on the reflexive phase. The authors aim to contribute to the debate about the influence of cognitive factors on emotion understanding.
ERIC Educational Resources Information Center
Osler, James Edward, II; Mansaray, Mahmud
2014-01-01
Many universities and colleges are increasingly concerned about enhancing the comprehension and knowledge of their students, particularly in the classroom. One method of enhancing student success is teaching effectiveness. The objective of this research paper is to propose a novel research model which examines the relationship between…
Continuous Evaluation of Fast Processes in Climate Models Using ARM Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhijin; Sha, Feng; Liu, Yangang
2016-02-02
This five-year award supports the project “Continuous Evaluation of Fast Processes in Climate Models Using ARM Measurements (FASTER)”. The goal of this project is to produce accurate, consistent and comprehensive data sets for initializing both single column models (SCMs) and cloud resolving models (CRMs) using data assimilation. A multi-scale three-dimensional variational data assimilation scheme (MS-3DVAR) has been implemented. This MS-3DVAR system is built on top of WRF/GSI. The Community Gridpoint Statistical Interpolation (GSI) system is an operational data assimilation system at the National Centers for Environmental Prediction (NCEP) and has been implemented in the Weather Research and Forecast (WRF) model. This MS-3DVAR is further enhanced by the incorporation of a land surface 3DVAR scheme and a comprehensive aerosol 3DVAR scheme. The data assimilation implementation focuses on the ARM SGP region. ARM measurements are assimilated along with other available satellite and radar data. Reanalyses are then generated for a few selected periods of time. This comprehensive data assimilation system has also been employed for other ARM-related applications.
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
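The factorization idea is concrete in code: a directed graphical model is just a set of nodes, each with a parent list and a conditional distribution, and ancestral sampling visits parents before children. Here is a toy example loosely in the spirit of a rates-across-branches phylogenetic component (node names and distributions are invented for illustration, not taken from the paper):

import numpy as np
rng = np.random.default_rng(3)

# Factorization: rate ~ Exponential(1);
# each branch length bl_i | rate ~ Exponential(rate).
# numpy's exponential() takes a scale, i.e. 1/rate.
model = {
    "rate": (lambda parents: rng.exponential(1.0), []),
    "bl_1": (lambda parents: rng.exponential(1.0 / parents["rate"]), ["rate"]),
    "bl_2": (lambda parents: rng.exponential(1.0 / parents["rate"]), ["rate"]),
}

def ancestral_sample(model):
    # Sample each node after all of its parents (topological traversal).
    values = {}
    def visit(name):
        if name in values:
            return
        fn, parents = model[name]
        for p in parents:
            visit(p)
        values[name] = fn({p: values[p] for p in parents})
    for name in model:
        visit(name)
    return values

print(ancestral_sample(model))

The same node-and-parents structure is what tree plates generalize: the subgraph repeated over branches changes shape with the sampled topology.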
2009-09-01
SAS: Statistical Analysis Software; SE: Systems Engineering; SEP: Systems Engineering Process; SHP: Shaft Horsepower; SIGINT: Signals Intelligence. ...management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem
ERIC Educational Resources Information Center
Ernst and Ernst, Chicago, IL.
Part 1 of the appendix to "A Model for the Determination of the Costs of Special Education as Compared with That for General Education" contains comprehensive descriptive and statistical information on Ernstville, a hypothetical school district conceived to illustrate the operation of a proposed cost accounting system. Included are sections on…
Ocean Surface Wave Optical Roughness - Innovative Measurement and Modeling
2008-01-01
(e.g. Jessup and Phadnis, 2005) have been reported. Our effort seeks to provide a more comprehensive description of the physical and optical roughness... 1986: Statistics of breaking waves observed as whitecaps in the open sea, Journal of Physical Oceanography, 16, 290-297. Jessup, A.T. and Phadnis
1988-09-01
...and selection of test waves. 30. Measured prototype wave data on which a comprehensive statistical analysis of wave conditions could be based were... Tests. Existing conditions. 32. Prior to testing of the various improvement plans, comprehensive tests were conducted for existing conditions (Plate 1
von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw
2012-10-01
To present the results from the test site application of the documentation model KPO (quality assurance, problem solving and caring), designed to impact the quality of nursing information in the electronic patient record (EPR). The KPO model was developed by means of a consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to impact the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at a one-year interval. Content analysis of nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information is found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories on nursing practice can influence the quality of nursing information. The KPO model can be used as a guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.
SSD for R: A Comprehensive Statistical Package to Analyze Single-System Data
ERIC Educational Resources Information Center
Auerbach, Charles; Schudrich, Wendy Zeitlin
2013-01-01
The need for statistical analysis in single-subject designs presents a challenge, as analytical methods that are applied to group comparison studies are often not appropriate in single-subject research. "SSD for R" is a robust set of statistical functions with wide applicability to single-subject research. It is a comprehensive package…
Quantifying the indirect impacts of climate on agriculture: an inter-method comparison
NASA Astrophysics Data System (ADS)
Calvin, Kate; Fisher-Vanden, Karen
2017-11-01
Climate change and increases in CO2 concentration affect the productivity of land, with implications for land use, land cover, and agricultural production. Much of the literature on the effect of climate on agriculture has focused on linking projections of changes in climate to process-based or statistical crop models. However, the changes in productivity have broader economic implications that cannot be quantified in crop models alone. How important are these socio-economic feedbacks to a comprehensive assessment of the impacts of climate change on agriculture? In this paper, we attempt to measure the importance of these interaction effects through an inter-method comparison between process models, statistical models, and integrated assessment model (IAMs). We find the impacts on crop yields vary widely between these three modeling approaches. Yield impacts generated by the IAMs are 20%-40% higher than the yield impacts generated by process-based or statistical crop models, with indirect climate effects adjusting yields by between -12% and +15% (e.g. input substitution and crop switching). The remaining effects are due to technological change.
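As simple arithmetic, the reported indirect adjustments compose multiplicatively with a direct crop-model effect. A sketch with an invented direct impact of -25% (the -12% to +15% indirect range is the one reported above; the direct figure is hypothetical):

# Illustrative numbers only: combine a direct climate-driven yield change
# with an indirect economic adjustment (input substitution, crop switching)
# as multiplicative factors.
direct = -0.25                          # e.g., -25% from a crop model
for indirect in (-0.12, 0.0, 0.15):     # indirect-effect range from the paper
    total = (1 + direct) * (1 + indirect) - 1
    print(f"indirect {indirect:+.0%} -> total {total:+.1%}")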
2014-09-01
Abstract: The U.S. North Atlantic coast is subject to coastal flooding as a result of both severe extratropical storms (e.g., Nor’easters)... Products and Services, excluding any kind of high-resolution hydrodynamic modeling. Tropical and extratropical storms were treated as a single... joint probability analysis and high-fidelity modeling of tropical and extratropical storms
Late paleozoic fusulinoidean gigantism driven by atmospheric hyperoxia.
Payne, Jonathan L; Groves, John R; Jost, Adam B; Nguyen, Thienan; Moffitt, Sarah E; Hill, Tessa M; Skotheim, Jan M
2012-09-01
Atmospheric hyperoxia, with pO2 in excess of 30%, has long been hypothesized to account for late Paleozoic (360-250 million years ago) gigantism in numerous higher taxa. However, this hypothesis has not been evaluated statistically because comprehensive size data have not been compiled previously at sufficient temporal resolution to permit quantitative analysis. In this study, we test the hyperoxia-gigantism hypothesis by examining the fossil record of fusulinoidean foraminifers, a dramatic example of protistan gigantism with some individuals exceeding 10 cm in length and exceeding their relatives by six orders of magnitude in biovolume. We assembled and examined comprehensive regional and global, species-level datasets containing 270 and 1823 species, respectively. A statistical model of size evolution forced by atmospheric pO2 is conclusively favored over alternative models based on random walks or a constant tendency toward size increase. Moreover, the ratios of volume to surface area in the largest fusulinoideans are consistent in magnitude and trend with a mathematical model based on oxygen transport limitation. We further validate the hyperoxia-gigantism model through an examination of modern foraminiferal species living along a measured gradient in oxygen concentration. These findings provide the first quantitative confirmation of a direct connection between Paleozoic gigantism and atmospheric hyperoxia. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
Statistics for X-chromosome associations.
Özbek, Umut; Lin, Hui-Min; Lin, Yan; Weeks, Daniel E; Chen, Wei; Shaffer, John R; Purcell, Shaun M; Feingold, Eleanor
2018-06-13
In a genome-wide association study (GWAS), association between genotype and phenotype at autosomal loci is generally tested by regression models. However, X-chromosome data are often excluded from published analyses of autosomes because of the difference between males and females in number of X chromosomes. Failure to analyze X-chromosome data at all is obviously less than ideal, and can lead to missed discoveries. Even when X-chromosome data are included, they are often analyzed with suboptimal statistics. Several mathematically sensible statistics for X-chromosome association have been proposed. The optimality of these statistics, however, is based on very specific simple genetic models. In addition, while previous simulation studies of these statistics have been informative, they have focused on single-marker tests and have not considered the types of error that occur even under the null hypothesis when the entire X chromosome is scanned. In this study, we comprehensively tested several X-chromosome association statistics using simulation studies that include the entire chromosome. We also considered a wide range of trait models for sex differences and phenotypic effects of X inactivation. We found that models that do not incorporate a sex effect can have large type I error in some cases. We also found that many of the best statistics perform well even when there are modest deviations, such as trait variance differences between the sexes or small sex differences in allele frequencies, from assumptions. © 2018 WILEY PERIODICALS, INC.
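One concrete modeling choice these simulations speak to is genotype coding and the inclusion of sex as a covariate. Below is a minimal simulated X-linked association test under an X-inactivation-style coding, in which hemizygous males (0/1) are rescaled to 0/2 to match homozygous females; all effect sizes are invented for illustration:

import numpy as np
rng = np.random.default_rng(11)

n = 2000
sex = rng.integers(0, 2, n)           # 0 = male, 1 = female
maf = 0.3
# Males carry one X allele, females two:
g = np.where(sex == 1, rng.binomial(2, maf, n), rng.binomial(1, maf, n))
# One common X-inactivation coding: scale male genotypes 0/1 to 0/2.
g_coded = np.where(sex == 1, g, 2 * g)

y = 0.5 * sex + 0.2 * g_coded + rng.normal(0, 1, n)   # simulated trait

# Regression including a sex covariate; the study notes that models
# omitting a sex effect can show inflated type I error.
X = np.column_stack([np.ones(n), sex, g_coded])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
print("genotype t-statistic:", beta[2] / se[2])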
Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo
2009-01-01
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450
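Once a posterior predictive distribution of yields is available, the actuarially fair premium rate follows directly as expected indemnity over liability. Here is a sketch with an invented predictive distribution standing in for the hierarchical model's output (coverage level and price are also hypothetical):

import numpy as np
rng = np.random.default_rng(5)

# Hypothetical posterior-predictive draws of county yield (kg/ha);
# in the paper these come from the hierarchical spatio-temporal model.
yields = rng.gamma(shape=20.0, scale=150.0, size=10000)

coverage = 0.7
guarantee = coverage * np.mean(yields)       # yield guarantee under the policy
price = 0.25                                 # hypothetical price per kg
expected_loss = np.mean(np.maximum(guarantee - yields, 0.0)) * price
liability = guarantee * price
print("fair premium rate:", expected_loss / liability)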
Modeling to Optimize Terminal Stem Cell Differentiation
Gallicano, G. Ian
2013-01-01
Embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and adult stem cells (ASCs) are among the most promising potential treatments for heart failure, spinal cord injury, neurodegenerative diseases, and diabetes. However, considerable uncertainty in the production of ESC-derived terminally differentiated cell types has limited the efficiency of their development. To address this uncertainty, we and other investigators have begun to employ a comprehensive statistical model of ESC differentiation for determining the role of intracellular pathways (e.g., STAT3) in ESC differentiation and determination of germ layer fate. The approach discussed here applies a Bayesian statistical model to cell/developmental biology, combining traditional flow cytometry methodology and specific morphological observations with advanced statistical and probabilistic modeling and experimental design. The final result of this study is a unique tool and model that enhances the understanding of how and when specific cell fates are determined during differentiation. This model provides a guideline for increasing the production efficiency of therapeutically viable ESC-, iPSC-, or ASC-derived neurons or any other cell type and will eventually lead to advances in stem cell therapy. PMID:24278782
Experience and Sentence Processing: Statistical Learning and Relative Clause Comprehension
Wells, Justine B.; Christiansen, Morten H.; Race, David S.; Acheson, Daniel J.; MacDonald, Maryellen C.
2009-01-01
Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen (2002) pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes. PMID:18922516
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation to obtain parameter estimates, with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation.
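A minimal sketch of the same estimation idea (not the RAD-ADAPT code, which is built on R and ADAPT): the linear-quadratic survival model fitted by Poisson maximum likelihood to hypothetical colony counts.

import numpy as np
from scipy.optimize import minimize

# Hypothetical clonogenic assay: dose (Gy), cells plated, colonies counted
dose     = np.array([0, 1, 2, 4, 6, 8.0])
plated   = np.array([100, 100, 200, 500, 2000, 10000.0])
colonies = np.array([65, 45, 60, 70, 80, 40.0])

def negloglik(theta):
    pe, alpha, beta = theta                     # plating efficiency and LQ parameters
    mu = plated * pe * np.exp(-(alpha * dose + beta * dose**2))
    return np.sum(mu - colonies * np.log(mu))   # Poisson NLL up to a data-only constant

fit = minimize(negloglik, x0=[0.6, 0.3, 0.03], method="L-BFGS-B",
               bounds=[(1e-6, 1.0), (0.0, 5.0), (0.0, 1.0)])
pe, alpha, beta = fit.x
print(f"PE={pe:.3f}, alpha={alpha:.3f}/Gy, beta={beta:.4f}/Gy^2")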
Stratification of Recanalization for Patients with Endovascular Treatment of Intracranial Aneurysms
Ogilvy, Christopher S.; Chua, Michelle H.; Fusco, Matthew R.; Reddy, Arra S.; Thomas, Ajith J.
2015-01-01
Background: With increasing utilization of endovascular techniques in the treatment of both ruptured and unruptured intracranial aneurysms, the issue of obliteration efficacy has become increasingly important. Objective: Our goal was to systematically develop a comprehensive model for predicting retreatment with various types of endovascular treatment. Methods: We retrospectively reviewed medical records that were prospectively collected for 305 patients who received endovascular treatment for intracranial aneurysms from 2007 to 2013. Multivariable logistic regression was performed on candidate predictors identified by univariable screening analysis to detect independent predictors of retreatment. A composite risk score was constructed based on the proportional contribution of independent predictors in the multivariable model. Results: Size (>10 mm), aneurysm rupture, stent assistance, and post-treatment degree of aneurysm occlusion were independently associated with retreatment, while intraluminal thrombosis and flow diversion demonstrated a trend towards retreatment. The Aneurysm Recanalization Stratification Scale was constructed by assigning the following weights to statistically and clinically significant predictors. Aneurysm-specific factors: size (>10 mm), 2 points; rupture, 2 points; presence of thrombus, 2 points. Treatment-related factors: stent assistance, -1 point; flow diversion, -2 points; Raymond Roy 2 occlusion, 1 point; Raymond Roy 3 occlusion, 2 points. This scale demonstrated good discrimination with a C-statistic of 0.799. Conclusion: Surgical decision-making and patient-centered informed consent require comprehensive and accessible information on treatment efficacy. We have constructed the Aneurysm Recanalization Stratification Scale to enhance this decision-making process. This is the first comprehensive model that has been developed to quantitatively predict the risk of retreatment following endovascular therapy. PMID:25621984
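Because the abstract states the point weights explicitly, the scale can be transcribed directly into a scoring function; the function and argument names below are our own.

def aneurysm_recanalization_score(size_gt_10mm, ruptured, thrombus,
                                  stent_assisted, flow_diversion,
                                  raymond_roy):
    # Composite risk score using the weights quoted in the abstract;
    # raymond_roy is the occlusion class (1, 2, or 3)
    score = 0
    score += 2 if size_gt_10mm else 0
    score += 2 if ruptured else 0
    score += 2 if thrombus else 0
    score += -1 if stent_assisted else 0
    score += -2 if flow_diversion else 0
    score += {1: 0, 2: 1, 3: 2}[raymond_roy]
    return score

# Example: large ruptured aneurysm, coil-only, Raymond Roy class 2 occlusion
print(aneurysm_recanalization_score(True, True, False, False, False, 2))  # -> 5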
Text-Based Recall and Extra-Textual Generations Resulting from Simplified and Authentic Texts
ERIC Educational Resources Information Center
Crossley, Scott A.; McNamara, Danielle S.
2016-01-01
This study uses a moving windows self-paced reading task to assess text comprehension of beginning and intermediate-level simplified texts and authentic texts by L2 learners engaged in a text-retelling task. Linear mixed effects (LME) models revealed statistically significant main effects for reading proficiency and text level on the number of…
Global, Local, and Graphical Person-Fit Analysis Using Person-Response Functions
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Sijtsma, Klaas; Meijer, Rob R.
2005-01-01
Person-fit statistics test whether the likelihood of a respondent's complete vector of item scores on a test is low given the hypothesized item response theory model. This binary information may be insufficient for diagnosing the cause of a misfitting item-score vector. The authors propose a comprehensive methodology for person-fit analysis in the…
Comparing the Effectiveness of SPSS and EduG Using Different Designs for Generalizability Theory
ERIC Educational Resources Information Center
Teker, Gulsen Tasdelen; Guler, Nese; Uyanik, Gulden Kaya
2015-01-01
Generalizability theory (G theory) provides a broad conceptual framework for social sciences such as psychology and education, and a comprehensive construct for numerous measurement events by using analysis of variance, a strong statistical method. G theory, as an extension of both classical test theory and analysis of variance, is a model which…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollias, Pavlos
2017-08-08
This is a multi-institutional, collaborative project using observations and modeling to study the evolution (e.g., formation and growth) of hydrometeors in continental convective clouds. Our contribution was in data analysis for the generation of high-value cloud and precipitation products and the derivation of cloud statistics for model validation. There are two areas of data analysis to which we contributed: i) the development of novel, state-of-the-art dual-wavelength radar algorithms for the retrieval of cloud microphysical properties, and ii) the evaluation of large-domain, high-resolution models using comprehensive multi-sensor observations. Our research group developed statistical summaries from numerous sensors and developed retrievals of vertical air motion in deep convection.
Bahlmann, Claus; Burkhardt, Hans
2004-03-01
In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device.
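CSDTW itself combines clustering with HMM-style statistical sequence modeling, which is beyond a short sketch, but its distance backbone, dynamic time warping, is compact. A classic dynamic-programming implementation for 1-D sequences:

import numpy as np

def dtw_distance(a, b):
    # Classic DTW: D[i, j] is the cheapest alignment cost of a[:i] and b[:j]
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2, 3, 2, 0], [0, 0, 1, 2, 3, 2, 1, 0]))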
NASA Astrophysics Data System (ADS)
Eliazar, Iddo I.; Shlesinger, Michael F.
2012-01-01
We introduce and explore a Stochastic Flow Cascade (SFC) model: A general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines together the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).
Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.
Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L
2016-01-01
The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.
Computational methods to extract meaning from text and advance theories of human cognition.
McNamara, Danielle S
2011-01-01
Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA.
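The computational core of LSA is a truncated SVD of a term-document matrix, with word meanings compared by cosine similarity in the reduced space. A toy sketch (the count matrix is fabricated, and real LSA typically applies a log-entropy weighting first):

import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents)
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 1, 2, 2]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]          # terms embedded in the k-dimensional latent space

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(term_vecs[0], term_vecs[1]))   # latent similarity of terms 0 and 1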
Tjam, Erin Y; Heckman, George A; Smith, Stuart; Arai, Bruce; Hirdes, John; Poss, Jeff; McKelvie, Robert S
2012-02-23
Though the NYHA functional classification is recommended in clinical settings, concerns have been raised about its reliability, particularly among older patients. The RAI 2.0 is a comprehensive assessment system specifically developed for frail seniors. We hypothesized that a prognostic model for heart failure (HF) developed from the RAI 2.0 would be superior to the NYHA classification. The purpose of this study was to determine whether an HF-specific prognostic model based on the RAI 2.0 is superior to the NYHA functional classification in predicting mortality in frail older HF patients. Secondary analysis of data from a prospective cohort study of an HF education program for care providers in long-term care and retirement homes. Univariate analyses identified RAI 2.0 variables predicting death at 6 months. These and the NYHA classification were used to develop logistic models. Two RAI 2.0 models were derived. The first includes six items: "weight gain of 5% or more of total body weight over 30 days", "leaving 25% or more food uneaten", "unable to lie flat", "unstable cognitive, ADL, moods, or behavioural patterns", "change in cognitive function" and "needing help to walk in room"; the C statistic was 0.866. The second includes the CHESS health instability scale and the item "requiring help walking in room"; the C statistic was 0.838. The C statistic for the NYHA scale was 0.686. These results suggest that data from the RAI 2.0, an instrument for comprehensive assessment of frail seniors, can better predict mortality than the NYHA classification.
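The C statistics quoted measure concordance: the probability that the model scores a randomly chosen patient who died above a randomly chosen survivor. A direct pairwise computation on hypothetical scores:

import numpy as np

def c_statistic(risk_scores, died):
    # Concordance: P(score_death > score_survivor), ties counted as 1/2
    risk_scores, died = np.asarray(risk_scores, float), np.asarray(died, bool)
    pos, neg = risk_scores[died], risk_scores[~died]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

print(c_statistic([0.9, 0.7, 0.4, 0.8, 0.2], [1, 1, 0, 0, 0]))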
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Archfield, S. A.; Farmer, W. H.; Kiang, J. E.
2014-12-01
The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is being used to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models. The GCPO LCC covers part or all of 12 states and 5 sub-geographies, totaling approximately 726,000 km2, and is centered on the lower Mississippi Alluvial Valley. A total of 346 USGS streamgages in the GCPO LCC region were selected to evaluate the performance of this new calibration methodology for the period 1980 to 2013. Initially, the physically-based models are calibrated to measured streamflow data to provide a baseline for comparison. An enhanced calibration procedure then is used to calibrate the physically-based models in the gaged and ungaged areas of the GCPO LCC using statistically-based estimates of streamflow. For this application, the calibration procedure is adjusted to address the limitations of the statistically generated time series to reproduce measured streamflow in gaged basins, primarily by incorporating error and bias estimates. As part of this effort, estimates of uncertainty in the model simulations are also computed for the gaged and ungaged watersheds.
Individual Differences in Statistical Learning Predict Children's Comprehension of Syntax
ERIC Educational Resources Information Center
Kidd, Evan; Arciuli, Joanne
2016-01-01
Variability in children's language acquisition is likely due to a number of cognitive and social variables. The current study investigated whether individual differences in statistical learning (SL), which has been implicated in language acquisition, independently predicted 6- to 8-year-old's comprehension of syntax. Sixty-eight (N = 68)…
Governance and Regional Variation of Homicide Rates: Evidence From Cross-National Data.
Cao, Liqun; Zhang, Yan
2017-01-01
Criminological theories of cross-national studies of homicide have underestimated the effects of quality governance of liberal democracy and region. Data sets from several sources are combined and a comprehensive model of homicide is proposed. Results of the spatial regression model, which controls for the effect of spatial autocorrelation, show that quality governance, human development, economic inequality, and ethnic heterogeneity are statistically significant in predicting homicide. In addition, regions of Latin America and non-Muslim Sub-Saharan Africa have significantly higher rates of homicides ceteris paribus while the effects of East Asian countries and Islamic societies are not statistically significant. These findings are consistent with the expectation of the new modernization and regional theories.
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides model users with a confidence level for the developed load model. The analysis results can also be used to calibrate load models. The proposed framework can serve as guidance for utility engineers and researchers in systematically examining load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong
2015-01-01
Background: Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives: This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods: We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results: There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. Conclusion: The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent. PMID:26053876
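The pooling step in such a meta-analysis of proportions can be sketched with a DerSimonian-Laird random-effects model on the logit scale. The study counts below are hypothetical, and this is a generic recipe rather than the authors' exact procedure.

import numpy as np

def pooled_proportion(events, totals):
    # Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                     # logit-transformed proportions
    v = 1 / events + 1 / (totals - events)      # within-study variance of the logit
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1 / (v + tau2)                     # weights including between-study variance
    y_pool = np.sum(w_star * y) / w_star.sum()
    return 1 / (1 + np.exp(-y_pool))            # back-transform to a proportion

print(pooled_proportion([120, 80, 200], [160, 130, 310]))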
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
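For concreteness, the Gauss-Hermite method compared above approximates each cluster's marginal likelihood by a weighted sum over quadrature nodes. A minimal sketch for a random-intercept logistic model with one cluster and hypothetical data:

import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_marginal_loglik(y, x, beta, sigma, n_nodes=15):
    # Gauss-Hermite approximation to log of the integral over the random intercept b:
    # integral of prod_i p(y_i | x_i, b) * N(b; 0, sigma^2) db
    nodes, weights = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes                 # change of variables for the Gaussian
    eta = x[:, None] * beta + b[None, :]             # linear predictor at each node
    p = 1 / (1 + np.exp(-eta))
    lik_at_node = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.sum(weights * lik_at_node) / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 0.2, 1.5])
print(cluster_marginal_loglik(y, x, beta=0.8, sigma=1.2))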
Protein and gene model inference based on statistical modeling in k-partite graphs.
Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter
2010-07-06
One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.
Scott, Jessica A; Hoffmeister, Robert J
2017-01-01
For many years, researchers have sought to understand the reading development of deaf and hard of hearing (DHH) students. Guided by prior research on DHH and hearing students, in this study we investigate the hypothesis that for secondary school DHH students enrolled in American Sign Language (ASL)/English bilingual schools for the deaf, academic English proficiency would be a significant predictor of reading comprehension alongside ASL proficiency. Using linear regression, we found statistically significant interaction effects between academic English knowledge and word reading fluency in predicting the reading comprehension scores of the participants. However, ASL remained the strongest and most consistent predictor of reading comprehension within the sample. Findings support a model in which socio-demographic factors, ASL proficiency, and word reading fluency are primary predictors of reading comprehension for secondary DHH students.
ERIC Educational Resources Information Center
Parrott, Roxanne; Silk, Kami; Dorgan, Kelly; Condit, Celeste; Harris, Tina
2005-01-01
Too little theory and research has considered the effects of communicating statistics in various forms on comprehension, perceptions of evidence quality, or evaluations of message persuasiveness. In a considered extension of Subjective Message Construct Theory (Morley, 1987), we advance a rationale relating evidence form to the formation of…
NASA Astrophysics Data System (ADS)
Marks, D. G.; Kormos, P.; Johnson, M.; Bormann, K. J.; Hedrick, A. R.; Havens, S.; Robertson, M.; Painter, T. H.
2017-12-01
Lidar-derived snow depths, when combined with modeled or estimated snow density, can provide reliable estimates of the distribution of SWE over large mountain areas. Application of this approach is transforming western snow hydrology. We present a comprehensive approach toward modeling bulk snow density that is reliable over a vast range of weather and snow conditions. The method is applied and evaluated over mountainous regions of California, Idaho, Oregon and Colorado in the western US. Simulated and measured snow density are compared at fourteen validation sites across the western US where measurements of snow mass (SWE) and depth are co-located. Fitting statistics for ten sites from three mountain catchments (two in Idaho, one in California) show an average Nash-Sutcliffe model efficiency coefficient of 0.83 and a mean bias of 4 kg m-3. Results illustrate issues associated with monitoring snow depth and SWE and show the effectiveness of the model, with a small mean bias across a range of snow and climate conditions in the west.
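The reported fit statistics are straightforward to reproduce. A sketch of the Nash-Sutcliffe efficiency and mean bias on hypothetical paired measured/simulated densities:

import numpy as np

def nash_sutcliffe(obs, sim):
    # 1 minus the ratio of model error variance to the variance of the observations
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def mean_bias(obs, sim):
    return np.mean(np.asarray(sim, float) - np.asarray(obs, float))

obs = [220, 260, 305, 340, 380]   # measured bulk density, kg m-3 (hypothetical)
sim = [230, 255, 300, 350, 375]
print(nash_sutcliffe(obs, sim), mean_bias(obs, sim))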
A comprehensive review of thin-layer drying models used in agricultural products.
Ertekin, Can; Firat, M Ziya
2017-03-04
Drying is one of the most widely used methods of grain, fruit, and vegetable preservation. The main aim of drying is to reduce the moisture content and thereby increase the lifetime of products by limiting enzymatic and oxidative degradation. In addition, by reducing the amount of water, drying reduces crop losses, improves the quality of dried products, and facilitates their transportation, handling, and storage requirements. Drying is a process comprising simultaneous heat and mass transfer within the material, and between the surface of the material and the surrounding medium. Many models have been used to describe the drying process for different agricultural products. These models are used to estimate the drying times of several products under different drying conditions, to increase the efficiency of the drying process, and to generalize drying curves for the design and operation of dryers. Several investigators have proposed numerous mathematical models for thin-layer drying of many agricultural products. This study gives a comprehensive review of more than 100 different semi-theoretical and empirical thin-layer drying models used for agricultural products and evaluates the statistical criteria used to determine the most appropriate model.
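As one concrete instance, a semi-theoretical thin-layer model such as the Page model, MR(t) = exp(-k t^n), is typically fitted by nonlinear least squares and judged by criteria like R^2. A sketch on fabricated drying data:

import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    return np.exp(-k * t**n)      # moisture ratio MR(t)

t  = np.array([0.5, 1, 2, 3, 4, 6, 8.0])                   # drying time, h (hypothetical)
mr = np.array([0.82, 0.66, 0.44, 0.30, 0.21, 0.10, 0.05])  # observed moisture ratio

(k, n), _ = curve_fit(page_model, t, mr, p0=[0.3, 1.0])
resid = mr - page_model(t, k, n)
r2 = 1 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)
print(f"k={k:.3f}, n={n:.3f}, R^2={r2:.4f}")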
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Annis, Charles; Sabbagh, Harold A.; Lindgren, Eric A.
2016-02-01
A comprehensive approach to NDE and SHM characterization error (CE) evaluation is presented that follows the framework of the `ahat-versus-a' regression analysis for POD assessment. Characterization capability evaluation is typically more complex than current POD evaluations and thus requires engineering and statistical expertise in the model-building process to ensure all key effects and interactions are addressed. Justifying the statistical model choice with its underlying assumptions is key. Several sizing case studies are presented with detailed evaluations of the most appropriate statistical model for each data set. The use of a model-assisted approach is introduced to help assess the reliability of NDE and SHM characterization capability under a wide range of part, environmental and damage conditions. Best practices for using models are presented for both an eddy current NDE sizing and a vibration-based SHM case study. The results of these studies highlight the general protocol feasibility, emphasize the importance of evaluating key application characteristics prior to the study, and demonstrate an approach to quantify the role of varying SHM sensor durability and environmental conditions on characterization performance.
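A minimal version of the ahat-versus-a analysis: regress measured size on true size and attach normal-theory bounds to the characterization error. The data are fabricated, and the bound below ignores the leverage term that a full POD analysis would include.

import numpy as np
from scipy import stats

# Hypothetical true (a) and measured (ahat) flaw depths, mm
a    = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
ahat = np.array([0.25, 0.38, 0.55, 0.85, 1.05, 1.15, 1.6, 1.9])

slope, intercept, r, p, se = stats.linregress(a, ahat)
resid = ahat - (intercept + slope * a)
s = resid.std(ddof=2)                         # residual standard error
t95 = stats.t.ppf(0.975, df=len(a) - 2)
print(f"ahat = {intercept:.3f} + {slope:.3f} a, +/- {t95 * s:.3f} (approx. 95% bound)")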
Mathematical Modelling for Patient Selection in Proton Therapy.
Mee, T; Kirkby, N F; Kirkby, K J
2018-05-01
Proton beam therapy (PBT) is still relatively new in cancer treatment and its clinical evidence base is sparse. Mathematical modelling offers assistance when selecting patients for PBT and predicting the demand for service. Discrete event simulation, normal tissue complication probability, quality-adjusted life-years and Markov chain models are all mathematical and statistical modelling techniques currently used, but none is dominant. As new evidence and outcome data become available from PBT, comprehensive models will emerge that are less dependent on the specific technologies of radiotherapy planning and delivery.
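A Markov chain model of the kind the review mentions can be sketched as a cohort simulation that accumulates discounted QALYs. The states, transition probabilities, utilities, and discount rate below are all hypothetical.

import numpy as np

def markov_qalys(P, utilities, horizon_years, discount=0.035):
    # Cohort Markov model: P is the annual transition matrix over health states,
    # utilities are per-state QALY weights; returns discounted QALYs per patient
    state = np.zeros(len(utilities)); state[0] = 1.0   # everyone starts in the first state
    total = 0.0
    for year in range(horizon_years):
        total += (state @ utilities) / (1 + discount)**year
        state = state @ P
    return total

# States: well, progressed, dead (hypothetical annual transition probabilities)
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
print(markov_qalys(P, utilities=np.array([0.9, 0.6, 0.0]), horizon_years=20))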
Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer
2006-03-01
Graduates should be able to analyze and design aircraft and missile guidance and control systems, including feedback stabilization schemes and stochastic processes. Excerpted course topics include: uncertainty modeling for robust control; robust closed-loop stability and performance; robust H-infinity control; and robustness checks using mu-analysis. A separate excerpt lists: controlled feedback (reduces noise) and statistical group response (reduces pressure toward conformity), when used as a tool to study a complex problem.
A New Mathematical Framework for Design Under Uncertainty
2016-05-05
...blending multiple information sources via auto-regressive stochastic modeling. A computationally efficient machine learning framework is developed based on regression and machine learning approaches (see Fig. 1). This will lead to a comprehensive description of system performance with less uncertainty than in the ... Bayesian optimization of super-cavitating hydrofoils: the goal of this study is to demonstrate the capabilities of statistical learning and ...
NASA Astrophysics Data System (ADS)
Kim, E.; Newton, A. P.
2012-04-01
One major problem in dynamo theory is the multi-scale nature of the MHD turbulence, which requires statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical property of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. Through considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. Furthermore, we show that probability density functions (PDFs) of the growth-rate, magnetic field and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour/intermittency. Finally, the precise statistical property of the dynamo, such as temporal correlation and fluctuating amplitude, is found to depend on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the α effect in stochastic α-Ω nonlinear dynamo models. This is achieved by performing a comprehensive statistical comparison, computing PDFs of solar activity from observations and from our simulations of the mean field dynamo model. The observational data used are the time history of solar activity inferred from C14 data over the past 11000 years on a long time scale and direct observations of sunspot numbers over the years 1795-1995 on a short time scale. Monte Carlo simulations are performed on these data to obtain PDFs of the solar activity on both long and short time scales. These PDFs are then compared with predicted PDFs from numerical simulation of our α-Ω dynamo model, where α is assumed to have both mean α0 and fluctuating α' parts. By varying the correlation time of fluctuating α', the ratio of the amplitude of the fluctuating to mean alpha <α'2>/α02 (where angular brackets <> denote ensemble average), and the ratio of poloidal to toroidal magnetic fields, we show that the results from our stochastic dynamo model can match the PDFs of solar activity on both long and short time scales. In particular, good agreement is obtained when the fluctuation in alpha is roughly equal to the mean part, with a correlation time shorter than the solar period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tratnyek, Paul G.; Bylaska, Eric J.; Weber, Eric J.
2017-01-01
Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with “in silico” results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for “in silico environmental chemical science” are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.
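In its simplest form, the QSAR-calibration idea reduces to regressing a measured property on a computed descriptor and using the fit to predict unmeasured compounds. A sketch with fabricated values; the descriptor choice is illustrative only.

import numpy as np

# Hypothetical training set: computed descriptor (e.g., a frontier-orbital energy, eV)
# versus measured log rate constant for a degradation reaction
descriptor = np.array([-0.5, -0.3, -0.1, 0.1, 0.3, 0.5])
log_k      = np.array([-1.8, -1.1, -0.6, 0.1, 0.7, 1.2])

slope, intercept = np.polyfit(descriptor, log_k, deg=1)
print(f"QSAR: log k = {intercept:.2f} + {slope:.2f} * descriptor")

# Predict log k for a compound with no measured value
print("predicted log k:", intercept + slope * 0.2)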
Statistical sensitivity analysis of a simple nuclear waste repository model
NASA Astrophysics Data System (ADS)
Ronen, Y.; Lucius, J. L.; Blow, E. M.
1980-06-01
This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria, and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.
NASA Astrophysics Data System (ADS)
Bovier, Anton
2006-06-01
Our mathematical understanding of the statistical mechanics of disordered systems is going through a period of stunning progress. This self-contained book is a graduate-level introduction for mathematicians and for physicists interested in the mathematical foundations of the field, and can be used as a textbook for a two-semester course on mathematical statistical mechanics. It assumes only basic knowledge of classical physics and, on the mathematics side, a good working knowledge of graduate-level probability theory. The book starts with a concise introduction to statistical mechanics, proceeds to disordered lattice spin systems, and concludes with a presentation of the latest developments in the mathematical understanding of mean-field spin glass models. In particular, recent progress towards a rigorous understanding of the replica symmetry-breaking solutions of the Sherrington-Kirkpatrick spin glass models, due to Guerra, Aizenman-Sims-Starr and Talagrand, is reviewed in some detail. Publisher's highlights: a comprehensive introduction to an active and fascinating area of research; a clear exposition that builds to the state of the art in the mathematics of spin glasses; written by a well-known and active researcher in the field.
Damage to ventral and dorsal language pathways in acute aphasia
Hartwigsen, Gesa; Kellmeyer, Philipp; Glauche, Volkmar; Mader, Irina; Klöppel, Stefan; Suchan, Julia; Karnath, Hans-Otto; Weiller, Cornelius; Saur, Dorothee
2013-01-01
Converging evidence from neuroimaging studies and computational modelling suggests an organization of language in a dual dorsal–ventral brain network: a dorsal stream connects temporoparietal with frontal premotor regions through the superior longitudinal and arcuate fasciculus and integrates sensorimotor processing, e.g. in repetition of speech. A ventral stream connects temporal and prefrontal regions via the extreme capsule and mediates meaning, e.g. in auditory comprehension. The aim of our study was to test, in a large sample of 100 aphasic stroke patients, how well acute impairments of repetition and comprehension correlate with lesions of either the dorsal or ventral stream. We combined voxelwise lesion-behaviour mapping with the dorsal and ventral white matter fibre tracts determined by probabilistic fibre tracking in our previous study in healthy subjects. We found that repetition impairments were mainly associated with lesions located in the posterior temporoparietal region with a statistical lesion maximum in the periventricular white matter in projection of the dorsal superior longitudinal and arcuate fasciculus. In contrast, lesions associated with comprehension deficits were found more ventral-anterior in the temporoprefrontal region with a statistical lesion maximum between the insular cortex and the putamen in projection of the ventral extreme capsule. Individual lesion overlap with the dorsal fibre tract showed a significant negative correlation with repetition performance, whereas lesion overlap with the ventral fibre tract revealed a significant negative correlation with comprehension performance. To summarize, our results from patients with acute stroke lesions support the claim that language is organized along two segregated dorsal–ventral streams. Particularly, this is the first lesion study demonstrating that task performance on auditory comprehension measures requires an interaction between temporal and prefrontal brain regions via the ventral extreme capsule pathway. PMID:23378217
Complex patterns of abnormal heartbeats
NASA Technical Reports Server (NTRS)
Schulte-Frohlinde, Verena; Ashkenazy, Yosef; Goldberger, Ary L.; Ivanov, Plamen Ch; Costa, Madalena; Morley-Davies, Adrian; Stanley, H. Eugene; Glass, Leon
2002-01-01
Individuals having frequent abnormal heartbeats interspersed with normal heartbeats may be at an increased risk of sudden cardiac death. However, mechanistic understanding of such cardiac arrhythmias is limited. We present a visual and qualitative method to display statistical properties of abnormal heartbeats. We introduce dynamical "heartprints" which reveal characteristic patterns in long clinical records encompassing approximately 10(5) heartbeats and may provide information about underlying mechanisms. We test if these dynamics can be reproduced by model simulations in which abnormal heartbeats are generated (i) randomly, (ii) at a fixed time interval following a preceding normal heartbeat, or (iii) by an independent oscillator that may or may not interact with the normal heartbeat. We compare the results of these three models and test their limitations to comprehensively simulate the statistical features of selected clinical records. This work introduces methods that can be used to test mathematical models of arrhythmogenesis and to develop a new understanding of underlying electrophysiologic mechanisms of cardiac arrhythmia.
Statistical self-similarity of width function maxima with implications to floods
Veitzer, S.A.; Gupta, V.K.
2001-01-01
Recently, a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures.
Juvenile Psychopathic Personality Traits are Associated with Poor Reading Achievement
DeLisi, Matt; Beaver, Kevin M.; Wexler, Jade; Barth, Amy; Fletcher, Jack
2011-01-01
The current study sought to further the understanding of the linkage between maladaptive behavior and cognitive problems by examining the relations between psychopathic personality traits and reading comprehension performance. Data were derived from a study of 432 predominately African-American and Hispanic middle-school students. Dependent variables consisted of three measures of reading comprehension. Psychopathy measures included the Inventory of Callous-Unemotional traits (ICU—teacher rated) and the self-reported Youth Psychopathic traits Inventory (YPI). Findings from regression models indicated that self-report and teacher ratings of psychopathy were statistically significant inverse predictors of reading performance. Specifically, affective facets of psychopathy were potent predictors of reading comprehension over and above ADHD, IQ, and an impulsivity component of psychopathy. Study results extend the utility of psychopathy construct generally and affective traits specifically to reading achievement, which has broad implications. Findings are discussed with respect to future research and prevention. PMID:20957434
How language production shapes language form and comprehension
MacDonald, Maryellen C.
2012-01-01
Language production processes can provide insight into how language comprehension works and language typology—why languages tend to have certain characteristics more often than others. Drawing on work in memory retrieval, motor planning, and serial order in action planning, the Production-Distribution-Comprehension (PDC) account links work in the fields of language production, typology, and comprehension: (1) faced with substantial computational burdens of planning and producing utterances, language producers implicitly follow three biases in utterance planning that promote word order choices that reduce these burdens, thereby improving production fluency. (2) These choices, repeated over many utterances and individuals, shape the distributions of utterance forms in language. The claim that language form stems in large degree from producers' attempts to mitigate utterance planning difficulty is contrasted with alternative accounts in which form is driven by language use more broadly, language acquisition processes, or producers' attempts to create language forms that are easily understood by comprehenders. (3) Language perceivers implicitly learn the statistical regularities in their linguistic input, and they use this prior experience to guide comprehension of subsequent language. In particular, they learn to predict the sequential structure of linguistic signals, based on the statistics of previously-encountered input. Thus, key aspects of comprehension behavior are tied to lexico-syntactic statistics in the language, which in turn derive from utterance planning biases promoting production of comparatively easy utterance forms over more difficult ones. This approach contrasts with classic theories in which comprehension behaviors are attributed to innate design features of the language comprehension system and associated working memory. The PDC instead links basic features of comprehension to a different source: production processes that shape language form. PMID:23637689
[Comprehension of hazard pictograms of chemical products among cleaning workers].
Martí Fernández, Francesc; van der Haar, Rudolf; López López, Juan Carlos; Portell, Mariona; Torner Solé, Anna
2015-01-01
To assess the comprehension among cleaning workers of the hazard pictograms as defined by the Globally Harmonized System (GHS) of the United Nations, concerning the classification, labeling and packaging of substances and mixtures. A sample of 118 workers was surveyed on their perception of the GHS hazard pictograms. Comprehensibility was measured by the percentage of correct answers and the degree to which they reflected International Organization for Standardization and American National Standards Institute standards for minimum level of comprehension. The influence of different variables to predict comprehension capacity was assessed using a logistic regression model. Three groups of pictograms could be distinguished which were statistically differentiated by their comprehensibility. Pictograms reflecting "acute toxicity" and "flammable" were described correctly by 94% and 95% of the surveyed population, respectively. For pictograms reflecting "systemic toxicity", "corrosive", "warning", "environment" and "explosive", the frequency of correct answers ranged from 48% to 64%, whereas the pictograms "oxidizing" and "compressed gas" were interpreted correctly by only 7% of respondents. Prognostic factors for poor comprehension included: not being familiar with the pictograms, not having received training on safe use of chemical products, being an immigrant, and being 54 years of age or older. Only two pictograms exceeded minimum standards for comprehension. Training, a tool proven to be effective in improving the correct interpretation of danger symbols, should be encouraged, especially in those groups with greater comprehension difficulties.
Bryant, Fred B
2016-12-01
This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods, advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data.
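The defining move of ODA, choosing the cut that maximizes classification accuracy rather than a likelihood, is easy to sketch for a binary class and a single ordered attribute. This is a toy version; the ODA software additionally handles weighting, multiple attributes, and permutation-based p values.

import numpy as np

def oda_cutpoint(x, y):
    # Exhaustive search for the cut on x that maximizes percent accuracy
    # classifying binary y -- the core idea of optimal data analysis
    x, y = np.asarray(x, float), np.asarray(y, int)
    best = (None, 0.0, None)
    for c in np.unique(x):
        for direction in (1, 0):     # predict y=1 above the cut, or below it
            pred = (x > c).astype(int) if direction else (x <= c).astype(int)
            acc = (pred == y).mean()
            if acc > best[1]:
                best = (c, acc, "x > cut" if direction else "x <= cut")
    return best

print(oda_cutpoint([1, 2, 3, 4, 5, 6], [0, 0, 1, 0, 1, 1]))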
Reverse engineering systems models of regulation: discovery, prediction and mechanisms.
Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S
2012-08-01
Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties.
Selecting the "Best" Factor Structure and Moving Measurement Validation Forward: An Illustration.
Schmitt, Thomas A; Sass, Daniel A; Chappelle, Wayne; Thompson, William
2018-04-09
Despite the broad literature base on factor analysis best practices, research seeking to evaluate a measure's psychometric properties frequently fails to consider or follow these recommendations. This leads to incorrect factor structures, numerous and often overly complex competing factor models and, perhaps most harmful, biased model results. Our goal is to demonstrate a practical and actionable process for factor analysis through (a) an overview of six statistical and psychometric issues and approaches to be aware of, investigate, and report when engaging in factor structure validation, along with a flowchart for recommended procedures to understand latent factor structures; (b) demonstrating these issues to provide a summary of the updated Posttraumatic Stress Disorder Checklist (PCL-5) factor models and a rationale for validation; and (c) conducting a comprehensive statistical and psychometric validation of the PCL-5 factor structure to demonstrate all the issues we described earlier. Considering previous research, the PCL-5 was evaluated using a sample of 1,403 U.S. Air Force remotely piloted aircraft operators with high levels of battlefield exposure. Previously proposed PCL-5 factor structures were not supported by the data, but instead a bifactor model is arguably more statistically appropriate.
NASA Technical Reports Server (NTRS)
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael G.; Gershunov, Alexander; Gutowski, William J., Jr.; Gyakum, John R.; Katz, Richard W.;
2015-01-01
The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme-event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary-scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.
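On the statistical side, one standard building block for extreme temperature analysis is a generalized extreme value (GEV) fit to block maxima, to which LSMP-based covariates can then be added. A minimal stationary sketch on synthetic annual maxima (all parameter values are invented):

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# Synthetic 60-year record of annual maximum temperature (deg C)
annual_max_temp = genextreme.rvs(c=-0.1, loc=38, scale=2, size=60, random_state=rng)

shape, loc, scale = genextreme.fit(annual_max_temp)
# 20-year return level: the 1 - 1/20 quantile of the fitted GEV
print("20-year return level:", genextreme.ppf(1 - 1/20, shape, loc, scale))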
ProUCL version 4.1.00 Documentation Downloads
ProUCL version 4.1.00 is a comprehensive statistical software package equipped with the statistical methods and graphical tools needed to address many environmental sampling and statistical issues, as described in various guidance documents.
Statistical Abstract of the United States: 2012. 131st Edition
ERIC Educational Resources Information Center
US Census Bureau, 2011
2011-01-01
"The Statistical Abstract of the United States," published from 1878 to 2012, is the authoritative and comprehensive summary of statistics on the social, political, and economic organization of the United States. It is designed to serve as a convenient volume for statistical reference, and as a guide to other statistical publications and…
Wang, Chao; Li, Shuang; Li, Tao; Yu, Shanfa; Dai, Junming; Liu, Xiaoman; Zhu, Xiaojun; Ji, Yuqing; Wang, Jin
2016-01-01
Background: This study aimed to identify the association between occupational stress and depression-well-being by proposing a comprehensive and flexible job burden-capital model with its corresponding hypotheses. Methods: For this research, 1618 valid samples were gathered from the electronic manufacturing service industry in Hunan Province, China; self-rated questionnaires were administered to participants for data collection after obtaining their written consent. The proposed model was fitted and tested through structural equation model analysis. Results: Single-factor correlation analysis results indicated that coefficients between all items and dimensions had statistical significance. The final model demonstrated satisfactory global goodness of fit (CMIN/DF = 5.37, AGFI = 0.915, NNFI = 0.945, IFI = 0.952, RMSEA = 0.052). Both the measurement and structural models showed acceptable path loadings. Job burden and capital were directly associated with depression and well-being or indirectly related to them through personality. Multi-group structural equation model analyses indicated general applicability of the proposed model to basic features of such a population. Gender, marriage and education led to differences in the relation between occupational stress and health outcomes. Conclusions: The job burden-capital model of occupational stress-depression and well-being was found to be more systematic and comprehensive than previous models. PMID:27529267
Wang, Chao; Li, Shuang; Li, Tao; Yu, Shanfa; Dai, Junming; Liu, Xiaoman; Zhu, Xiaojun; Ji, Yuqing; Wang, Jin
2016-08-12
This study aimed to identify the association between occupational stress and depression-well-being by proposing a comprehensive and flexible job burden-capital model with its corresponding hypotheses. For this research, 1618 valid samples were gathered from the electronic manufacturing service industry in Hunan Province, China; self-rated questionnaires were administered to participants for data collection after obtaining their written consent. The proposed model was fitted and tested through structural equation model analysis. Single-factor correlation analysis results indicated that coefficients between all items and dimensions had statistical significance. The final model demonstrated satisfactory global goodness of fit (CMIN/DF = 5.37, AGFI = 0.915, NNFI = 0.945, IFI = 0.952, RMSEA = 0.052). Both the measurement and structural models showed acceptable path loadings. Job burden and capital were directly associated with depression and well-being or indirectly related to them through personality. Multi-group structural equation model analyses indicated general applicability of the proposed model to basic features of such a population. Gender, marriage and education led to differences in the relation between occupational stress and health outcomes. The job burden-capital model of occupational stress-depression and well-being was found to be more systematic and comprehensive than previous models.
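For readers unfamiliar with the fit indices quoted in this abstract, here is a hedged illustration of how two of them relate to the model chi-square; the df value below is hypothetical, chosen so the numbers echo the reported CMIN/DF and RMSEA.

```python
import numpy as np

def fit_indices(chi2, df, n):
    """CMIN/DF and point-estimate RMSEA from a chi-square test of model fit."""
    cmin_df = chi2 / df
    rmsea = np.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))
    return cmin_df, rmsea

# values roughly consistent with the abstract's CMIN/DF = 5.37 on n = 1618
# (df = 100 is a hypothetical stand-in); yields RMSEA close to 0.052
print(fit_indices(chi2=5.37 * 100, df=100, n=1618))
```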
School-based clinics: their role in helping students meet the 1990 objectives.
Dryfoos, J G; Klerman, L V
1988-01-01
Service statistics and observations from site visits across the country indicate that school-based clinics (SBCs) may be having an impact on several of the problems targeted in the 1990 health objectives, including unplanned pregnancy and substance abuse. At least 120 junior and senior high schools in 61 communities are currently operating or developing clinics. Growth is attributed to increasing concern about high-risk youth, especially among educators in their roles of "surrogate parents"; to disillusion with categorical interventions and a movement toward more comprehensive services; and to student, parent, school, and community approval of the new programs. This article describes the comprehensive school-based clinic model, including its history, organizational strategies, school/community partnerships, and services.
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling
Wood, John
2017-01-01
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
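A hedged sketch of the reanalysis idea: fit Gaussian mixture models with increasing numbers of components to study-level power estimates and let BIC pick the number of subcomponents. The data below are synthetic stand-ins, not Button et al.'s estimates.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# synthetic stand-in for study-level power estimates with three subgroups
power = np.concatenate([
    rng.normal(0.08, 0.03, 300),
    rng.normal(0.35, 0.10, 250),
    rng.normal(0.80, 0.08, 180),
]).clip(0.01, 0.99).reshape(-1, 1)

# fit mixtures with 1-5 components and choose the count that minimizes BIC
models = {k: GaussianMixture(n_components=k, random_state=0).fit(power)
          for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(power))
print("components favoured by BIC:", best_k)
print("component means:", models[best_k].means_.ravel().round(2))
```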
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
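The relationship the abstract refers to is simply CV = sd / mean, so the standard deviation can be read as a mean-scaled quantity; a tiny worked example with made-up scores:

```python
import numpy as np

scores = np.array([72.0, 85.0, 90.0, 64.0, 79.0])  # hypothetical exam scores
cv = scores.std(ddof=1) / scores.mean()            # coefficient of variation
print(f"mean={scores.mean():.1f}, sd={scores.std(ddof=1):.1f}, CV={cv:.2f}")
```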
Gregl, Ana; Kirigin, Marin; Bilać, Snjeiana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro
2014-09-01
This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver-Teacher Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than their peers. In children with SLI, speech comprehension significantly correlated with scores on the Attention Deficit/Hyperactivity Problems (CBCL and C-TRF) and Pervasive Developmental Problems (CBCL) scales (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on the Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of the variance in speech comprehension is explained by five CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model, Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process lies in the background of problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and children with normal language development who exhibit ADHD symptoms.
Nevada's Children: Selected Educational and Social Statistics. Nevada and National.
ERIC Educational Resources Information Center
Horner, Mary P., Comp.
This statistical report describes the successes and shortcomings of education in Nevada and compares some statistics concerning education in Nevada to national norms. The report, which provides a comprehensive array of information helpful to policy makers and citizens, is divided into three sections. The first section presents statistics about…
Statistics Report on TEQSA Registered Higher Education Providers
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2015
2015-01-01
This statistics report provides a comprehensive snapshot of national statistics on all parts of the sector for the year 2013, by bringing together data collected directly by TEQSA with data sourced from the main higher education statistics collections managed by the Australian Government Department of Education and Training. The report provides…
Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal
2011-09-01
The Aristotle Comprehensive Complexity (ACC) and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) scores have been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. Previous studies found RACHS-1 to be a better predictor of outcome than the Aristotle Basic Complexity score. We compared the ability to predict operative mortality and morbidity between ACC, the latest update of the Aristotle method, and RACHS-1. Morbidity was assessed by length of intensive care unit stay. We retrospectively enrolled patients undergoing congenital heart surgery. We modeled each score as a continuous variable, mortality as a binary variable, and length of stay as a censored variable. We compared performance between mortality and morbidity models using likelihood ratio tests for nested models and paired concordance statistics. Among all 1,384 patients enrolled, the 30-day mortality rate was 3.5% and the median length of intensive care unit stay was 3 days. Both scores were strongly related to mortality, but ACC made better predictions than RACHS-1 (c-indexes 0.87 [0.84, 0.91] vs 0.75 [0.65, 0.82]). Both scores related to overall length of stay only during the first postoperative week, but ACC again made better predictions than RACHS-1 (U statistic = 0.22, p < 0.001). No significant difference was noted after adjusting RACHS-1 models for age, prematurity, and major extracardiac abnormalities. The ACC was a better predictor of operative mortality and length of intensive care unit stay than RACHS-1. To achieve similar performance, regression models including RACHS-1 need to be further adjusted for age, prematurity, and major extracardiac abnormalities.
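A hedged illustration of the comparison metric: for a binary outcome such as 30-day mortality, the c-index equals the area under the ROC curve, so it can be computed with scikit-learn. The scores and outcomes below are simulated stand-ins, not the ACC or RACHS-1 data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 500
score_a = rng.normal(size=n)                          # stand-in for a sharper score
score_b = score_a + rng.normal(scale=1.5, size=n)     # noisier stand-in score
risk = 1 / (1 + np.exp(-score_a))                     # latent mortality risk
died = rng.uniform(size=n) < 0.035 * risk / risk.mean()   # ~3.5% mortality overall

# for a binary outcome, the c-index is the ROC AUC of the score
print("c-index, score A:", round(roc_auc_score(died, score_a), 2))
print("c-index, score B:", round(roc_auc_score(died, score_b), 2))
```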
Sabel, Michael S; Rice, John D; Griffith, Kent A; Lowe, Lori; Wong, Sandra L; Chang, Alfred E; Johnson, Timothy M; Taylor, Jeremy M G
2012-01-01
To identify melanoma patients at sufficiently low risk of nodal metastases who could avoid sentinel lymph node biopsy (SLNB), several statistical models have been proposed based upon patient/tumor characteristics, including logistic regression, classification trees, random forests, and support vector machines. We sought to validate recently published models meant to predict sentinel node status. We queried our comprehensive, prospectively collected melanoma database for consecutive melanoma patients undergoing SLNB. Prediction values were estimated based upon four published models, calculating the same reported metrics: negative predictive value (NPV), rate of negative predictions (RNP), and false-negative rate (FNR). Logistic regression performed comparably with our data when considering NPV (89.4 versus 93.6%); however, the model's specificity was not high enough to significantly reduce the rate of biopsies (SLN reduction rate of 2.9%). When applied to our data, the classification tree produced NPV and reduction in biopsy rates that were lower (87.7 versus 94.1 and 29.8 versus 14.3, respectively). Two published models could not be applied to our data due to model complexity and the use of proprietary software. Published models meant to reduce the SLNB rate among patients with melanoma either underperformed when applied to our larger dataset, or could not be validated. Differences in selection criteria and histopathologic interpretation likely resulted in underperformance. Statistical predictive models must be developed in a clinically applicable manner to allow for both validation and ultimately clinical utility.
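The three reported metrics are straightforward to compute from model predictions and observed node status; a minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np

def sln_metrics(y_true, y_pred):
    """NPV, rate of negative predictions, and false-negative rate.
    Labels: 1 = positive sentinel node, 0 = negative."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    neg_pred = y_pred == 0
    npv = (y_true[neg_pred] == 0).mean()   # negative predictive value
    rnp = neg_pred.mean()                  # fraction of patients predicted negative
    fnr = (y_true[neg_pred] == 1).sum() / max((y_true == 1).sum(), 1)
    return npv, rnp, fnr

# hypothetical cohort: 100 patients, 15 node-positive
y_true = [0] * 85 + [1] * 15
y_pred = [0] * 80 + [1] * 5 + [0] * 3 + [1] * 12
print("NPV=%.2f RNP=%.2f FNR=%.2f" % sln_metrics(y_true, y_pred))
```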
Methodology and application of combined watershed and ground-water models in Kansas
Sophocleous, M.; Perkins, S.P.
2000-01-01
Increased irrigation in Kansas and other regions during the last several decades has caused serious water depletion, making the development of comprehensive strategies and tools to resolve such problems increasingly important. This paper makes the case for an intermediate complexity, quasi-distributed, comprehensive, large-watershed model, which falls between the fully distributed, physically based hydrological modeling system of the type of the SHE model and the lumped, conceptual rainfall-runoff modeling system of the type of the Stanford watershed model. This is achieved by integrating the quasi-distributed watershed model SWAT with the fully-distributed ground-water model MODFLOW. The advantage of this approach is the appreciably smaller input data requirements and the use of readily available data (compared to the fully distributed, physically based models), the statistical handling of watershed heterogeneities by employing the hydrologic-response-unit concept, and the significantly increased flexibility in handling stream-aquifer interactions, distributed well withdrawals, and multiple land uses. The mechanics of integrating the component watershed and ground-water models are outlined, and three real-world management applications of the integrated model from Kansas are briefly presented. Three different aspects of the integrated model are emphasized: (1) management applications of a Decision Support System for the integrated model (Rattlesnake Creek subbasin); (2) alternative conceptual models of spatial heterogeneity related to the presence or absence of an underlying aquifer with shallow or deep water table (Lower Republican River basin); and (3) the general nature of the integrated model linkage by employing a watershed simulator other than SWAT (Wet Walnut Creek basin). These applications demonstrate the practicality and versatility of this relatively simple and conceptually clear approach, making public acceptance of the integrated watershed modeling system much easier. This approach also enhances model calibration and thus the reliability of model results.
Iwasa, Hajime; Yoshida, Hideyo; Kim, Hunkyung; Yoshida, Yuko; Kwon, Jinhee; Sugiura, Miho; Furuna, Taketo; Suzuki, Takao
2007-06-01
Recent studies have revealed that there are critical differences between participants and non-participants in health examinations. The aim of this study was to examine mortality differences between participants and non-participants in a comprehensive health examination for prevention of geriatric syndromes among community-dwelling elderly people, using a three-year prospective cohort study. The study population included 854 adults aged 70 to 84 at baseline. The following items were studied: the status of participation in the comprehensive health examination as the independent variable; age, gender, number of years of education, living alone, presence of chronic diseases, experience of falls over one year, history of hospitalization over one year, self-rated health, body mass index, instrumental activities of daily living, and subjective well-being as covariates; and all-cause mortality during a three-year follow-up as the dependent variable. In an adjusted Cox proportional hazards regression model, the mortality risk for participants in the comprehensive health examination was significantly lower than that of non-participants (risk ratio for participants = 0.44; 95% confidence interval, 0.24-0.78). The present study shows that there is a large mortality difference between participants and non-participants. Our findings suggest two possible interpretations: (1) there is a bias due to self-selection for participation in the trial, which was not eliminated by adjustment for the covariates in the statistical model; or (2) there is an intervention effect associated with participation in the comprehensive health examination which reduces the mortality risk.
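A minimal sketch of the kind of adjusted Cox model described above, using the lifelines package; the variable names and simulated data are hypothetical stand-ins for the study's covariates.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 854  # cohort size matching the abstract; all values below are simulated
df = pd.DataFrame({
    "followup_years": rng.uniform(0.1, 3.0, n),   # (possibly censored) follow-up time
    "died": rng.integers(0, 2, n),                # all-cause mortality indicator
    "participant": rng.integers(0, 2, n),         # took part in the examination
    "age": rng.integers(70, 85, n),
    "male": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
print(cph.summary["exp(coef)"])   # hazard/risk ratios per covariate
```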
Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.
Zhao, Wenyi; Zhang, Chao
2008-07-01
We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE), which focal-plane-array-like sensors require to obtain clean, enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and accurate parametric motion estimation, we are able to remove severe, structured nonuniformity and, in the presence of subpixel motion, to simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out, in principle, perfect super-resolution by exploiting available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
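As background for the NUC step (a simplified stand-in, not the paper's registration-based method): classic two-point calibration estimates per-pixel gain and offset from two flat-field frames and inverts the fixed-pattern model.

```python
import numpy as np

rng = np.random.default_rng(5)
gain = 1 + 0.1 * rng.normal(size=(64, 64))    # fixed-pattern gain of the array
offset = 5 * rng.normal(size=(64, 64))        # fixed-pattern offset

def observe(scene):
    """Simple focal-plane-array measurement model."""
    return gain * scene + offset

# calibration frames at two known flat-field radiance levels
low, high = observe(np.full((64, 64), 20.0)), observe(np.full((64, 64), 80.0))
g_est = (high - low) / (80.0 - 20.0)          # per-pixel gain estimate
o_est = low - g_est * 20.0                    # per-pixel offset estimate

frame = observe(rng.uniform(0, 100, (64, 64)))   # arbitrary scene
corrected = (frame - o_est) / g_est              # nonuniformity-corrected frame
print("residual nonuniformity:",
      np.abs(corrected - (frame - offset) / gain).max())  # ~0 up to float error
```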
ERIC Educational Resources Information Center
McCulloch, Ryan Sterling
2017-01-01
The role of any statistics course is to increase the understanding and comprehension of statistical concepts and those goals can be achieved via both theoretical instruction and statistical software training. However, many introductory courses either forego advanced software usage, or leave its use to the student as a peripheral activity. The…
Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant
2016-05-15
Many studies of Aerosol Optical Depth (AOD) have used data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), but the accuracy of satellite data relative to ground data from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC), and the Ljung-Box test are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in model performance. It was found that AOD can be predicted by statistical modeling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS data can be estimated from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. The forecast of AOD for the next four years (2013-2017) from the developed ARIMA model indicates an increasing trend in ground-based AOD.
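A hedged sketch of the modeling approach named above: fitting an ARIMA model to a monthly AOD series with statsmodels and forecasting ahead. The series and model order below are synthetic and hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
t = np.arange(144)  # 12 years of monthly values, matching the 2001-2012 window
aod = (0.4 + 0.001 * t                      # slow trend
       + 0.1 * np.sin(2 * np.pi * t / 12)   # seasonal cycle
       + 0.05 * rng.normal(size=144))       # noise
series = pd.Series(aod, index=pd.date_range("2001-01", periods=144, freq="MS"))

fit = ARIMA(series, order=(1, 1, 1)).fit()  # hypothetical (p, d, q) order
print(fit.summary().tables[0])
print(fit.forecast(steps=48).head())        # multi-year-ahead forecast
```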
Sabel, Michael S.; Rice, John D.; Griffith, Kent A.; Lowe, Lori; Wong, Sandra L.; Chang, Alfred E.; Johnson, Timothy M.; Taylor, Jeremy M.G.
2013-01-01
Introduction: To identify melanoma patients at sufficiently low risk of nodal metastases who could avoid sentinel lymph node biopsy (SLNB), several statistical models have been proposed based upon patient/tumor characteristics, including logistic regression, classification trees, random forests, and support vector machines. We sought to validate recently published models meant to predict sentinel node status. Methods: We queried our comprehensive, prospectively collected melanoma database for consecutive melanoma patients undergoing SLNB. Prediction values were estimated based upon four published models, calculating the same reported metrics: negative predictive value (NPV), rate of negative predictions (RNP), and false-negative rate (FNR). Results: Logistic regression performed comparably with our data when considering NPV (89.4% vs. 93.6%); however, the model's specificity was not high enough to significantly reduce the rate of biopsies (SLN reduction rate of 2.9%). When applied to our data, the classification tree produced NPV and biopsy-reduction rates that were lower (87.7% vs. 94.1% and 29.8% vs. 14.3%, respectively). Two published models could not be applied to our data due to model complexity and the use of proprietary software. Conclusions: Published models meant to reduce the SLNB rate among patients with melanoma either underperformed when applied to our larger dataset or could not be validated. Differences in selection criteria and histopathologic interpretation likely resulted in underperformance. Statistical predictive models must be developed in a clinically applicable manner to allow for both validation and, ultimately, clinical utility. PMID:21822550
A Bayesian Joint Model of Menstrual Cycle Length and Fecundity
Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Germaine M. Buck; Louis, Thomas A.
2015-01-01
Menstrual cycle length (MCL) has been shown to play an important role in couple fecundity, which is the biologic capacity for reproduction irrespective of pregnancy intentions. However, a comprehensive assessment of its role requires a fecundity model that accounts for male and female attributes and the couple's intercourse pattern relative to the ovulation day. To this end, we employ a Bayesian joint model for MCL and pregnancy. MCLs follow a scale-multiplied (accelerated) mixture model with Gaussian and Gumbel components; the pregnancy model includes MCL as a covariate and computes the cycle-specific probability of pregnancy in a menstrual cycle conditional on the pattern of intercourse and no previous fertilization. Day-specific fertilization probability is modeled using natural cubic splines. We analyze data from the Longitudinal Investigation of Fertility and the Environment Study (the LIFE Study), a couple-based prospective pregnancy study, and find a statistically significant quadratic relation between fecundity and menstrual cycle length, after adjustment for intercourse pattern and other attributes, including male semen quality, both partners' ages, and active smoking status (determined by baseline cotinine level 100 ng/mL). We compare results to those produced by a more basic model and show the advantages of a more comprehensive approach. PMID:26295923
[Dermatoglyphics in the prognostication of constitutional and physical traits in humans].
Mazur, E S; Sidorenko, A G
2009-01-01
The present study was designed to elucidate the relationship between palmar and digital dermatoglyphic patterns and descriptive signs of human appearance, based on the results of a comprehensive anthropometric examination of 2,620 men and 380 women. A battery of different methods was used to statistically treat the results of dactyloscopic records. They demonstrated a correlation between skin patterns and external body features that can be used to construct diagnostic models for the purpose of personal identification.
Multi-criterion model ensemble of CMIP5 surface air temperature over China
NASA Astrophysics Data System (ADS)
Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming
2018-05-01
The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and, therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions, because GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R), and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, simultaneously achieving satisfactory values on all of these statistics cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained from a simple model average, as well as to any single GCM output. The improvements in the statistics vary across the climatic regions of China. Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in South Central China (Inner Mongolia), North Eastern China (South Central China), and North Western China (South Central China) under the RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.
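A minimal sketch of the trade-off being optimized (not the MOSPD algorithm itself): evaluating weighted GCM ensembles against observations on RMSE and CC simultaneously, with synthetic data standing in for model output.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.normal(15, 5, 1200)                            # observed SAT series
gcms = obs + rng.normal(0, [[2], [3], [4]], (3, 1200))   # three noisy "models"

def scores(w):
    """RMSE and correlation of a weighted ensemble against observations."""
    ens = w @ gcms / w.sum()
    rmse = np.sqrt(np.mean((ens - obs) ** 2))
    cc = np.corrcoef(ens, obs)[0, 1]
    return rmse, cc

for w in (np.array([1, 1, 1]), np.array([3, 1, 1]), np.array([1, 0, 0])):
    rmse, cc = scores(w)
    print(w, f"RMSE={rmse:.2f}  CC={cc:.3f}")
# A multi-objective optimizer such as MOSPD searches weightings like these for
# a Pareto front of trade-offs rather than a single best compromise.
```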
Bugana, Marco; Severi, Stefano; Sobie, Eric A.
2014-01-01
Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs. PMID:24675446
Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A
2014-03-01
Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Maier, Holger R.; Wu, Wenyan; Dandy, Graeme C.; Gupta, Hoshin V.; Zhang, Tuqiao
2018-02-01
Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.
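A small illustration of the phenomenon under study: holding everything else fixed, changing only the random calibration/evaluation allocation changes the evaluation score of an ANN. The data and model below are synthetic stand-ins, not the paper's catchment records or splitting methods.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(8)
x = rng.uniform(0, 1, (2000, 3))                              # lagged rainfall features
y = np.exp(3 * x[:, 0]) + x[:, 1] + rng.normal(0, 0.3, 2000)  # skewed "runoff" target

# same 70/30 allocation ratio, three different random allocations
for seed in range(3):
    xtr, xte, ytr, yte = train_test_split(x, y, test_size=0.3, random_state=seed)
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                       random_state=0).fit(xtr, ytr)
    rmse = mean_squared_error(yte, ann.predict(xte)) ** 0.5
    print(f"split {seed}: evaluation RMSE = {rmse:.3f}")
```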
Statistical modeling of optical attenuation measurements in continental fog conditions
NASA Astrophysics Data System (ADS)
Khan, Muhammad Saeed; Amin, Muhammad; Awan, Muhammad Saleem; Minhas, Abid Ali; Saleem, Jawad; Khan, Rahimdad
2017-03-01
Free-space optics is an innovative technology that uses the atmosphere as a propagation medium to provide higher data rates. These links are heavily affected by the atmospheric channel, mainly because fog and clouds scatter and even block the modulated beam of light from reaching the receiver end, imposing severe attenuation. A comprehensive statistical study of fog effects and a deep physical understanding of fog phenomena are very important for suggesting improvements (in reliability and efficiency) to such communication systems. In this regard, six months of real-time measured fog-attenuation data are considered and statistically investigated. A detailed statistical analysis of each fog event in that period is presented; the best probability density functions are selected on the basis of the Akaike information criterion, while the estimates of unknown parameters are computed by the maximum likelihood estimation technique. The results show that most fog attenuation events follow a normal mixture distribution and some follow the Weibull distribution.
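A hedged sketch of the distribution-selection step: fit candidate distributions to attenuation samples by maximum likelihood and rank them by AIC. The data are synthetic and the candidate set is illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# synthetic fog-attenuation samples (dB), drawn from a Weibull for illustration
atten_db = stats.weibull_min.rvs(2.0, scale=15.0, size=400, random_state=rng)

candidates = {"normal": stats.norm,
              "weibull": stats.weibull_min,
              "lognormal": stats.lognorm}
for name, dist in candidates.items():
    params = dist.fit(atten_db)                    # maximum-likelihood estimates
    loglik = dist.logpdf(atten_db, *params).sum()
    aic = 2 * len(params) - 2 * loglik             # lower AIC = preferred model
    print(f"{name:10s} AIC = {aic:.1f}")
```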
Moshtagh-Khorasani, Majid; Akbarzadeh-T, Mohammad-R; Jahangiri, Nader; Khoobdel, Mehdi
2009-01-01
BACKGROUND: Aphasia diagnosis is particularly challenging due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, large number of measurements with imprecision, natural diversity and subjectivity in test objects as well as in opinions of experts who diagnose the disease. METHODS: Fuzzy probability is proposed here as the basic framework for handling the uncertainties in medical diagnosis and particularly aphasia diagnosis. To efficiently construct this fuzzy probabilistic mapping, statistical analysis is performed that constructs input membership functions as well as determines an effective set of input features. RESULTS: Considering the high sensitivity of performance measures to different distribution of testing/training sets, a statistical t-test of significance is applied to compare fuzzy approach results with NN results as well as author's earlier work using fuzzy logic. The proposed fuzzy probability estimator approach clearly provides better diagnosis for both classes of data sets. Specifically, for the first and second type of fuzzy probability classifiers, i.e. spontaneous speech and comprehensive model, P-values are 2.24E-08 and 0.0059, respectively, strongly rejecting the null hypothesis. CONCLUSIONS: The technique is applied and compared on both comprehensive and spontaneous speech test data for diagnosis of four Aphasia types: Anomic, Broca, Global and Wernicke. Statistical analysis confirms that the proposed approach can significantly improve accuracy using fewer Aphasia features. PMID:21772867
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
Modeling the Development of Audiovisual Cue Integration in Speech Perception.
Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C
2017-03-21
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
The impacts of recent smoking control policies on individual smoking choice: the case of Japan
2013-01-01
This article comprehensively examines the impact of recent smoking control policies in Japan, increases in cigarette taxes and the enforcement of the Health Promotion Law, on individual smoking choice by using multi-year, nationwide individual survey data to overcome the analytical problems of previous Japanese studies. In the econometric analyses, I specify a simple binary choice model based on a random utility model to examine the effects of smoking control policies on individual smoking choice, employing the instrumental variable probit model to control for the endogeneity of cigarette prices. The empirical results show that an increase in cigarette prices statistically significantly reduces the smoking probability of males by 1.0 percent and that of females by 1.4 to 2.0 percent. The enforcement of the Health Promotion Law has a statistically significant effect on reducing the smoking probability of males by 15.2 percent and of females by 11.9 percent. Furthermore, an increase in cigarette prices has a statistically significant negative effect on the smoking probability of office workers, non-workers, male manual workers, and female unemployed people, and the enforcement of the Health Promotion Law has a statistically significant effect on decreasing the smoking probabilities of office workers, female manual workers, and male non-workers. JEL classification: C25, C26, I18. PMID:23497490
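A simplified sketch of the outcome model (binary smoking choice via probit). Note that the paper instruments cigarette prices to handle endogeneity; the plain statsmodels probit below does not do that, and all variables are simulated, hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 5000
price = rng.normal(300, 30, n)       # hypothetical cigarette price (yen per pack)
post_law = rng.integers(0, 2, n)     # Health Promotion Law in force indicator
latent = 1.5 - 0.004 * price - 0.3 * post_law + rng.normal(size=n)
smoke = (latent > 0).astype(int)     # observed smoking choice

X = sm.add_constant(pd.DataFrame({"price": price, "post_law": post_law}))
res = sm.Probit(smoke, X).fit(disp=False)
print(res.get_margeff().summary())   # marginal effects on smoking probability
```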
Education Statistics Quarterly, Spring 2001.
ERIC Educational Resources Information Center
Education Statistics Quarterly, 2001
2001-01-01
The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products and funding opportunities developed over a 3-month period. Each issue…
Statistical Exposé of a Multiple-Compartment Anaerobic Reactor Treating Domestic Wastewater.
Pfluger, Andrew R; Hahn, Martha J; Hering, Amanda S; Munakata-Marr, Junko; Figueroa, Linda
2018-06-01
Mainstream anaerobic treatment of domestic wastewater is a promising energy-generating treatment strategy; however, such reactors operated in colder regions are not well characterized. Performance data from a pilot-scale, multiple-compartment anaerobic reactor taken over 786 days were subjected to comprehensive statistical analyses. Results suggest that chemical oxygen demand (COD) was a poor proxy for organics in anaerobic systems as oxygen demand from dissolved inorganic material, dissolved methane, and colloidal material influence dissolved and particulate COD measurements. Additionally, univariate and functional boxplots were useful in visualizing variability in contaminant concentrations and identifying statistical outliers. Further, significantly different dissolved organic removal and methane production was observed between operational years, suggesting that anaerobic reactor systems may not achieve steady-state performance within one year. Last, modeling multiple-compartment reactor systems will require data collected over at least two years to capture seasonal variations of the major anaerobic microbial functions occurring within each reactor compartment.
Does money matter in inflation forecasting?
NASA Astrophysics Data System (ADS)
Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.
2010-11-01
This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
A Comparison of the Achievement of Statistics Students Enrolled in Online and Face-to-Face Settings
ERIC Educational Resources Information Center
Christmann, Edwin P.
2017-01-01
This study compared the achievement of male and female students who were enrolled in an online univariate statistics course to students enrolled in a traditional face-to-face univariate statistics course. The subjects, 47 graduate students enrolled in univariate statistics classes at a public, comprehensive university, were randomly assigned to…
Trends in study design and the statistical methods employed in a leading general medicine journal.
Gosho, M; Sato, Y; Nagashima, K; Takahashi, S
2018-02-01
Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. Comprehensive study of the details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information for evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension of the study by Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Owing to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., the Kaplan-Meier estimator and the Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazards model, which account for competing risks, were sometimes used for more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were used more frequently than single imputation methods. Single imputation methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analyses in light of these findings. Use of adaptive designs with interim analyses has been increasing since the release of the FDA guidance on adaptive design.
Modeling Area-Level Health Rankings.
Courtemanche, Charles; Soneji, Samir; Tchernis, Rusty
2015-10-01
We rank county health using a Bayesian factor analysis model, drawing on secondary county data from the National Center for Health Statistics (through 2007) and the Behavioral Risk Factor Surveillance System (through 2009). Our model builds on the existing county health rankings (CHRs) by using data-derived weights to compute ranks from mortality and morbidity variables, and by quantifying uncertainty based on population, spatial correlation, and missing data. We apply our model to Wisconsin, which has comprehensive data, and Texas, which has substantial missing information. The data were downloaded from www.countyhealthrankings.org. Our estimated rankings are more similar to the CHRs for Wisconsin than for Texas, as the data-derived factor weights are closer to the assigned weights for Wisconsin. The correlations between the CHRs and our ranks are 0.89 for Wisconsin and 0.65 for Texas. Uncertainty is especially severe for Texas, given the state's substantial missing data. The reliability of comprehensive CHRs varies from state to state. We advise focusing on the counties that remain among the least healthy after incorporating alternate weighting methods and accounting for uncertainty. Our results also highlight the need for broader geographic coverage in health data.
Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft
Ning, Xin; Yuan, Jianping; Yue, Xiaokui
2016-01-01
A fractionated spacecraft is an innovative application of a distributed space system. To fully understand the impact of various uncertainties on its development, launch, and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability, and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe the concept and then analyze the evaluation and optimal design methods developed in recent years, proposing the stochastic mission-cycle cost for comprehensive evaluation. We also establish models of the costs, such as module development, launch, and deployment, and of the impacts of their respective uncertainties. Finally, we carry out Monte Carlo simulations of the complete mission-cycle costs of various configurations of the fractionated spacecraft under various uncertainties, and we present and compare the probability density distributions and statistical characteristics of the stochastic mission-cycle cost, using the two strategies of timed module replacement and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that our evaluation method can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755
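A minimal sketch of the evaluation idea: Monte Carlo simulation of a stochastic mission-cycle cost under uncertain development, launch, and replacement costs. All cost distributions and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
n_runs, n_modules = 100_000, 4   # hypothetical four-module configuration

# uncertain per-module costs, in millions of dollars (lognormal assumptions)
dev = rng.lognormal(mean=np.log(50), sigma=0.2, size=(n_runs, n_modules))
launch = rng.lognormal(mean=np.log(20), sigma=0.3, size=(n_runs, n_modules))

# in-orbit module loss triggers a rebuild-plus-relaunch replacement cost
failure = rng.uniform(size=(n_runs, n_modules)) < 0.15
replacement = failure * (dev * 0.6 + launch)

total = (dev + launch + replacement).sum(axis=1)   # mission-cycle cost per run
print(f"mean cost {total.mean():.0f} M$, 5-95% interval "
      f"[{np.percentile(total, 5):.0f}, {np.percentile(total, 95):.0f}] M$")
```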
McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M.; Rasch, Elizabeth K.
2014-01-01
Objectives: To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Design: Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. Setting: In-person and semi-structured interviews; internet and telephone surveys. Participants: A sample of 1,017 SSA claimants, and a normative sample of 999 adults from the US general population. Interventions: Not applicable. Main Outcome Measure: Model fit statistics. Results: The final item pool consisted of 139 items. Within the claimant sample, 58.7% were white, 31.8% were black, 46.6% were female, and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution which included more items and allowed separate characterization of: (1) Changing and Maintaining Body Position, (2) Whole Body Mobility, (3) Upper Body Function, and (4) Upper Extremity Fine Motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for the claimant and normative samples, respectively, were: Comparative Fit Index = 0.93 and 0.98; Tucker-Lewis Index = 0.92 and 0.98; Root Mean Square Error of Approximation = 0.05 and 0.04. Conclusions: The factor structure of the Physical Function item pool closely resembled the hypothesized content model. The four scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. PMID:23542402
McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M; Rasch, Elizabeth K
2013-09-01
To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. In-person and semistructured interviews and Internet and telephone surveys. Sample of SSA claimants (n=1017) and a normative sample of adults from the U.S. general population (n=999). Not applicable. Model fit statistics. The final item pool consisted of 139 items. Within the claimant sample, 58.7% were white; 31.8% were black; 46.6% were women; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution, which included more items and allowed separate characterization of: (1) changing and maintaining body position, (2) whole body mobility, (3) upper body function, and (4) upper extremity fine motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples, respectively, were: Comparative Fit Index=.93 and .98; Tucker-Lewis Index=.92 and .98; and root mean square error approximation=.05 and .04. The factor structure of the physical function item pool closely resembled the hypothesized content model. The 4 scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability.
USING STATISTICAL METHODS FOR WATER QUALITY MANAGEMENT: ISSUES, PROBLEMS AND SOLUTIONS
This book is readable, comprehensible, and, I anticipate, usable. The author has an enthusiasm which comes out in the text. Statistics is presented as a living, breathing subject, still being debated, defined, and refined. This statistics book actually has examples in the field...
Graph-based structural change detection for rotating machinery monitoring
NASA Astrophysics Data System (ADS)
Lu, Guoliang; Liu, Jie; Yan, Peng
2018-01-01
Detection of structural changes is critically important in the operational monitoring of a rotating machine. This paper presents a novel framework for this purpose, in which a graph model is adopted to represent and capture statistical dynamics in machine operations. Meanwhile, we develop a numerical method for computing temporal anomalies in the constructed graphs. The martingale-test method is employed for change detection when making decisions on possible structural changes, where excellent performance is demonstrated, outperforming existing methods such as the autoregressive integrated moving average (ARIMA) model. Comprehensive experimental results indicate good potential of the proposed algorithm in various engineering applications. This work is an extension of a recent result (Lu et al., 2017).
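The martingale test referred to above can be sketched generically: strangeness scores are converted to conformal p-values against past observations, and a randomized power martingale M_n = prod(eps * p_i^(eps-1)) signals a change when it exceeds a threshold. The sketch below is this generic test on a toy score stream, not the authors' graph-based implementation; the strangeness stream, eps, and threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_pvalue(history, x, rng):
    """Randomized p-value of strangeness x against past strangeness values."""
    history = np.asarray(history)
    gt = np.sum(history > x)
    eq = np.sum(history == x) + 1  # ties, including x itself
    return (gt + rng.uniform() * eq) / (len(history) + 1)

def power_martingale(scores, eps=0.92, threshold=20.0):
    """Flag a change when M_n = prod(eps * p_i**(eps-1)) exceeds a threshold."""
    m, history = 1.0, []
    for i, s in enumerate(scores):
        if history:
            p = conformal_pvalue(history, s, rng)
            m *= eps * p ** (eps - 1.0)
            if m > threshold:
                return i  # index of detected structural change
        history.append(s)
    return None

# Toy strangeness stream: the statistics shift upward mid-sequence.
scores = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
print("change detected at index:", power_martingale(scores))
```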
Implications of convection in the moon and the terrestrial planets
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1991-01-01
A comprehensive review is made of the thermal and chemical evolution of the moon and the terrestrial planets. New results obtained for Venus by the Magellan mission are presented; efforts were concentrated on this planet. Alternative models were examined for the thermal structure of the lithosphere of Venus. The statistical distribution of the locations of the coronae on Venus was studied. Models were examined for the patterns of faulting around the coronae on Venus. A series of viscous models for the development and relaxation of elevation anomalies on Venus was considered, and rates of solidification of volcanic flows on Venus were studied. Both radiative and convective heat transfer were considered.
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.
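For readers without R, the core computation MPTinR automates (maximum-likelihood estimation of MPT parameters from category counts) can be sketched in Python for the simplest case, a one-high-threshold recognition model; the counts below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# One-high-threshold MPT: P(hit) = D + (1-D)*g, P(false alarm) = g,
# where D is detection probability and g the guessing rate.
# Hypothetical counts: [hits, misses] on old items, [false alarms, correct rejections] on new.
old = np.array([75, 25])
new = np.array([30, 70])

def nll(params):
    D, g = params
    p_hit = D + (1 - D) * g
    p_fa = g
    probs = np.array([p_hit, 1 - p_hit, p_fa, 1 - p_fa])
    counts = np.concatenate([old, new])
    return -np.sum(counts * np.log(np.clip(probs, 1e-12, 1)))

res = minimize(nll, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
D_hat, g_hat = res.x
print(f"detection D={D_hat:.3f}, guessing g={g_hat:.3f}")
```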
Statistical Learning Analysis in Neuroscience: Aiming for Transparency
Hanke, Michael; Halchenko, Yaroslav O.; Haxby, James V.; Pollmann, Stefan
2009-01-01
Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires “neuroscience-aware” technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities. PMID:20582270
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
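The binomial probability-of-detection calculation described here corresponds, in modern terms, to a Clopper-Pearson confidence bound. A minimal sketch with scipy (not the original program):

```python
from scipy.stats import beta

def pod_lower_bound(detected, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower bound on the probability of flaw detection."""
    if detected == 0:
        return 0.0
    return beta.ppf(1 - confidence, detected, trials - detected + 1)

# E.g., all 29 of 29 cracks found: lower bound ~0.90, the classic
# "90/95" demonstration (POD >= 0.90 at 95% confidence).
print(f"POD lower bound: {pod_lower_bound(29, 29):.3f}")
```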
Chinese Obstetrics & Gynecology journal club: a randomised controlled trial
Tsui, Ilene K; Dodson, William C; Kunselman, Allen R; Kuang, Hongying; Han, Feng-Juan; Legro, Richard S; Wu, Xiao-Ke
2016-01-01
Objectives To assess whether a journal club model could improve comprehension and written and spoken medical English in a population of Chinese medical professionals. Setting and participants The study population consisted of 52 medical professionals who were residents or postgraduate master or PhD students in the Department of Obstetrics and Gynecology, Heilongjiang University of Chinese Medicine, China. Intervention After a three-part baseline examination to assess medical English comprehension, participants were randomised to either (1) an intensive journal club treatment arm or (2) a self-study group. At the conclusion of the 8-week intervention, participants (n=52) were re-tested with new questions. Outcome measures The primary outcome was the change in score on a multiple choice examination. Secondary outcomes included change in scores on written and oral examinations which were modelled on the Test of English as a Foreign Language (TOEFL). Results Both groups had improved scores on the multiple choice examination without a statistically significant difference between them (90% power). However, there was a statistically significant difference between the groups in mean improvement in scores for both written (95% CI 1.1 to 5.0; p=0.003) and spoken English (95% CI 0.06 to 3.7; p=0.04) favouring the journal club intervention. Conclusions Interacting with colleagues and an English-speaking facilitator in a journal club improved both written and spoken medical English in Chinese medical professionals. Journal clubs may be suitable for use as a self-sustainable teaching model to improve fluency in medical English in foreign medical professionals. Trial registration number NCT01844609. PMID:26823180
An R package for analyzing and modeling ranking data.
Lee, Paul H; Yu, Philip L H
2013-05-14
In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 is the most preferred item and item 3 is the least preferred item, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference of the seven items (labeled as "internal/external"), and the second dimension can be interpreted as the overall variance of the items (labeled as "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose that which is most suitable to their specific situations.
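The descriptive statistics pmr computes are straightforward to sketch outside R; below is a Python version of the mean-rank and pairwise-frequency calculations on a toy ranking matrix (the data are invented, not the Hong Kong physician dataset).

```python
import numpy as np

# Toy ranking data: each row ranks 4 items (1 = most preferred).
rankings = np.array([
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
    [3, 1, 2, 4],
])

mean_rank = rankings.mean(axis=0)
print("mean ranks:", mean_rank)  # lower = more preferred

# Pairwise frequency matrix: entry (i, j) counts how often item i is ranked above item j.
n_items = rankings.shape[1]
pairwise = np.zeros((n_items, n_items), dtype=int)
for i in range(n_items):
    for j in range(n_items):
        if i != j:
            pairwise[i, j] = np.sum(rankings[:, i] < rankings[:, j])
print(pairwise)
```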
Drug target inference through pathway analysis of genomics data
Ma, Haisu; Zhao, Hongyu
2013-01-01
Statistical modeling coupled with bioinformatics is commonly used for drug discovery. Although there exist many approaches for single target based drug design and target inference, recent years have seen a paradigm shift to system-level pharmacological research. Pathway analysis of genomics data represents one promising direction for computational inference of drug targets. This article aims at providing a comprehensive review on the evolving issues in this field, covering methodological developments, their pros and cons, as well as future research directions. PMID:23369829
Enhancing seasonal climate prediction capacity for the Pacific countries
NASA Astrophysics Data System (ADS)
Kuleshov, Y.; Jones, D.; Hendon, H.; Charles, A.; Cottrill, A.; Lim, E.-P.; Langford, S.; de Wit, R.; Shelton, K.
2012-04-01
Seasonal and inter-annual climate variability is a major factor in determining the vulnerability of many Pacific Island Countries to climate change, and there is a need to improve weekly-to-seasonal-range climate prediction capabilities beyond what is currently available from statistical models. We describe a comprehensive project, under the seasonal climate prediction component of the Australian Government's Pacific Adaptation Strategy Assistance Program (PASAP), to strengthen climate prediction capacities in the National Meteorological Services of 14 Pacific Island Countries and East Timor. The intent is particularly to reduce the vulnerability of current services to a changing climate, and to improve the overall level of information available to assist with managing climate variability. Statistical models cannot account for aspects of climate variability and change that are not represented in the historical record. In contrast, dynamical physics-based models implicitly include the effects of a changing climate whatever its character or cause and can predict outcomes not seen previously. The transition from a statistical to a dynamical prediction system provides more valuable and applicable climate information to a wide range of climate-sensitive sectors throughout the countries of the Pacific region. In this project, we have developed seasonal climate outlooks based upon the current dynamical model POAMA (Predictive Ocean-Atmosphere Model for Australia) seasonal forecast system. At present, the meteorological services of the Pacific Island Countries largely employ statistical models for seasonal outlooks. Outcomes of the PASAP project enhanced the seasonal prediction capabilities of the Pacific Island Countries, providing National Meteorological Services with an additional tool to analyse meteorological variables such as sea surface temperature, air temperature, pressure and rainfall using POAMA outputs and to prepare more accurate seasonal climate outlooks.
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
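The calculation the paper describes is a one-liner with modern statistical libraries; a sketch with an invented contingency table:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table of observed counts.
observed = [[20, 30, 25],
            [30, 20, 25]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p-value = {p:.4f}")
```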
NASA Astrophysics Data System (ADS)
Ashe, E.; Kopp, R. E.; Khan, N.; Horton, B.; Engelhart, S. E.
2016-12-01
Sea level varies over both space and time. Prior to the instrumental period, the sea-level record depends upon geological reconstructions that contain vertical and temporal uncertainty. Spatio-temporal statistical models enable the interpretation of RSL and rates of change as well as the reconstruction of the entire sea-level field from such noisy data. Hierarchical models explicitly distinguish between a process level, which characterizes the spatio-temporal field, and a data level, at which sparse proxy data and their noise are recorded. A hyperparameter level depicts prior expectations about the structure of variability in the spatio-temporal field. Spatio-temporal hierarchical models are amenable to several analysis approaches, with tradeoffs regarding computational efficiency and comprehensiveness of uncertainty characterization. A fully Bayesian hierarchical model (BHM), which places prior probability distributions upon the hyperparameters, is more computationally intensive than an empirical hierarchical model (EHM), which uses point estimates of hyperparameters derived from the data [1]. Here, we assess the sensitivity of posterior estimates of relative sea level (RSL) and rates to different statistical approaches by varying prior assumptions about the spatial and temporal structure of sea-level variability and applying multiple analytical approaches to Holocene sea-level proxies along the Atlantic coast of North America and the Caribbean [2]. References: 1. Cressie N, Wikle CK (2011) Statistics for Spatio-Temporal Data (John Wiley & Sons). 2. Khan N et al. (2016) Quaternary Science Reviews (in revision).
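An EHM in the sense above can be sketched as Gaussian-process regression in which the hyperparameters are point-estimated by maximizing the marginal likelihood, with proxy noise entering at the data level. The toy example below (time-only, synthetic proxies, scikit-learn) illustrates the structure, not the authors' spatio-temporal implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy sea-level proxies: noisy observations of a smooth RSL curve over time (kyr BP).
t = np.sort(rng.uniform(0, 10, 40))[:, None]
rsl = -0.8 * t.ravel() + 0.5 * np.sin(t.ravel()) + rng.normal(0, 0.3, 40)

# Process level: smooth latent field (RBF); data level: proxy noise (WhiteKernel).
# Fitting maximizes the marginal likelihood, i.e. point-estimates the hyperparameters (EHM).
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, rsl)

t_grid = np.linspace(0, 10, 101)[:, None]
mean, sd = gp.predict(t_grid, return_std=True)
print("posterior mean and sd at t = 5 kyr:", mean[50], sd[50])
```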
Moral Virtue and Practical Wisdom: Theme Comprehension in Children, Youth and Adults
Narvaez, Darcia; Gleason, Tracy; Mitchell, Christyan
2010-01-01
Three hypotheses were tested about the relation of moral comprehension to prudential comprehension by contrasting comprehension of themes in moral stories with comprehension of themes in prudential stories among third grade, fifth grade and college students (n = 168) in Study 1, and among college students, young and middle aged adults, and older adults (n = 96) in Study 2. In both studies, all groups were statistically significantly better at moral theme comprehension than prudential theme comprehension, suggesting that moral comprehension may develop prior to prudential comprehension. In Study 2, all groups performed equally on moral theme generation whereas both adult groups were significantly better than college students on prudential theme generation. Overall, the findings of these studies provide modest evidence that moral and prudential comprehension each develop separately, and that the latter may develop more slowly. PMID:21171549
Transportation statistics annual report 2000
DOT National Transportation Integrated Search
2001-01-01
The Transportation Statistics Annual Report (TSAR) is a Congressionally mandated publication with wide distribution. The TSAR provides the most comprehensive overview of U.S. transportation that is done on an annual basis. TSAR examines the extent of...
2017-01-01
Cytochrome P450 aromatase (CYP19A1) plays a key role in the development of estrogen dependent breast cancer, and aromatase inhibitors have been at the front line of treatment for the past three decades. The development of potent, selective and safer inhibitors is ongoing with in silico screening methods playing a more prominent role in the search for promising lead compounds in bioactivity-relevant chemical space. Here we present a set of comprehensive binding affinity prediction models for CYP19A1 using our automated Linear Interaction Energy (LIE) based workflow on a set of 132 putative and structurally diverse aromatase inhibitors obtained from a typical industrial screening study. We extended the workflow with machine learning methods to automatically cluster training and test compounds in order to maximize the number of explained compounds in one or more predictive LIE models. The method uses protein–ligand interaction profiles obtained from Molecular Dynamics (MD) trajectories to help model search and define the applicability domain of the resolved models. Our method was successful in accounting for 86% of the data set in 3 robust models that show high correlation between calculated and observed values for ligand-binding free energies (RMSE < 2.5 kJ/mol), with good cross-validation statistics. PMID:28776988
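The LIE ansatz expresses the binding free energy as a linear combination of ensemble-averaged interaction-energy differences, roughly Delta G = alpha * Delta E_vdW + beta * Delta E_ele + gamma. A least-squares calibration sketch with invented energies follows (the real workflow derives these averages from MD trajectories):

```python
import numpy as np

# Hypothetical MD-averaged interaction-energy differences (bound minus free), kJ/mol.
dE_vdw = np.array([-45.0, -38.5, -52.1, -30.2, -41.7])
dE_ele = np.array([-12.3, -20.1, -8.7, -15.4, -10.9])
dG_obs = np.array([-32.0, -30.5, -34.8, -25.1, -29.9])  # experimental binding free energies

# Fit Delta G = alpha * dE_vdw + beta * dE_ele + gamma by least squares.
X = np.column_stack([dE_vdw, dE_ele, np.ones_like(dE_vdw)])
(alpha, beta, gamma), *_ = np.linalg.lstsq(X, dG_obs, rcond=None)
pred = X @ np.array([alpha, beta, gamma])
rmse = np.sqrt(np.mean((pred - dG_obs) ** 2))
print(f"alpha={alpha:.3f} beta={beta:.3f} gamma={gamma:.2f} RMSE={rmse:.2f} kJ/mol")
```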
Reaction rates for mesoscopic reaction-diffusion kinetics
Hellander, Stefan; Hellander, Andreas; Petzold, Linda
2016-01-01
The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. We show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results. PMID:25768640
Probabilistic arithmetic automata and their applications.
Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven
2012-01-01
We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.
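Application 1 can be sketched concretely: build a KMP-style automaton for the pattern (a minimal DAA), drive it with an i.i.d. random-text model, and propagate a joint distribution over automaton state and occurrence count. The toy Python version below is not the MoSDi implementation, and the pattern, alphabet, and probabilities are assumptions.

```python
import numpy as np

def occurrence_count_distribution(pattern, text_len, char_probs):
    """Exact distribution of overlapping occurrence counts of `pattern` in an
    i.i.d. random text of length `text_len`, via dynamic programming over a
    KMP-style automaton (a tiny DAA driven by a random text model)."""
    m = len(pattern)

    def step(state, c):
        # Longest prefix of `pattern` that is a suffix of the text read so far.
        s = pattern[:state] + c
        for k in range(min(len(s), m), -1, -1):
            if s.endswith(pattern[:k]):
                return k

    dp = np.zeros((m + 1, text_len + 1))  # dp[state, count] = probability mass
    dp[0, 0] = 1.0
    for _ in range(text_len):
        nxt = np.zeros_like(dp)
        for state in range(m + 1):
            for count in np.nonzero(dp[state])[0]:
                for c, pc in char_probs.items():
                    ns = step(state, c)
                    nxt[ns, count + (ns == m)] += dp[state, count] * pc
        dp = nxt
    return dp.sum(axis=0)  # marginalize over automaton states

dist = occurrence_count_distribution("ab", text_len=10, char_probs={"a": 0.5, "b": 0.5})
print("P(#occurrences = k) for k = 0..3:", np.round(dist[:4], 4))
```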
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.
Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P
2017-08-23
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents, so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered (some very seriously so), but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al.
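The mixture-modeling step can be sketched with scikit-learn: fit Gaussian mixtures with varying numbers of components to study-level power estimates and select by BIC. The data below are simulated stand-ins, not the Button et al. sample.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Simulated stand-in for study-level power estimates (NOT the Button et al. data):
# a low-power subcomponent plus an adequately powered one.
power = np.concatenate([rng.beta(1.5, 8, 500), rng.beta(8, 2, 230)])[:, None]

# Choose the number of mixture components by BIC.
fits = [GaussianMixture(n_components=k, random_state=0).fit(power) for k in (1, 2, 3, 4)]
best = min(fits, key=lambda g: g.bic(power))
print("components:", best.n_components)
print("means:", best.means_.ravel().round(2), "weights:", best.weights_.round(2))
```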
South Carolina Higher Education Statistical Abstract, 2014. 36th Edition
ERIC Educational Resources Information Center
Armour, Mim, Ed.
2014-01-01
The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2014 edition of the Statistical Abstract marks the 36th year of…
South Carolina Higher Education Statistical Abstract, 2015. 37th Edition
ERIC Educational Resources Information Center
Armour, Mim, Ed.
2015-01-01
The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2015 edition of the Statistical Abstract marks the 37th year of…
Statistical Handbook on Consumption and Wealth in the United States.
ERIC Educational Resources Information Center
Kaul, Chandrika, Ed.; Tomaselli-Moschovitis, Valerie, Ed.
This easy-to-use statistical handbook features the most up-to-date and comprehensive data related to U.S. wealth and consumer spending patterns. More than 300 statistical tables and charts are organized into 8 detailed sections. Intended for students, teachers, and general users, the handbook contains these sections: (1) "General Economic…
Erguler, Kamil; Stumpf, Michael P H
2011-05-01
The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle, dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms that the inability to precisely estimate most model or kinetic parameters is a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
McClintock, Martha K; Dale, William; Laumann, Edward O; Waite, Linda
2016-05-31
The World Health Organization (WHO) defines health as a "state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." Despite general acceptance of this comprehensive definition, there has been little rigorous scientific attempt to use it to measure and assess population health. Instead, the dominant model of health is a disease-centered Medical Model (MM), which actively ignores many relevant domains. In contrast to the MM, we approach this issue through a Comprehensive Model (CM) of health consistent with the WHO definition, giving statistically equal consideration to multiple health domains, including medical, physical, psychological, functional, and sensory measures. We apply a data-driven latent class analysis (LCA) to model 54 specific health variables from the National Social Life, Health, and Aging Project (NSHAP), a nationally representative sample of US community-dwelling older adults. We first apply the LCA to the MM, identifying five health classes differentiated primarily by having diabetes and hypertension. The CM identifies a broader range of six health classes, including two "emergent" classes completely obscured by the MM. We find that specific medical diagnoses (cancer and hypertension) and health behaviors (smoking) are far less important than mental health (loneliness), sensory function (hearing), mobility, and bone fractures in defining vulnerable health classes. Although the MM places two-thirds of the US population into "robust health" classes, the CM reveals that one-half belong to less healthy classes, independently associated with higher mortality. This reconceptualization has important implications for medical care delivery, preventive health practices, and resource allocation.
Novotná, H; Kmiecik, O; Gałązka, M; Krtková, V; Hurajová, A; Schulzová, V; Hallmann, E; Rembiałkowska, E; Hajšlová, J
2012-01-01
The rapidly growing demand for organic food requires the availability of analytical tools enabling their authentication. Recently, metabolomic fingerprinting/profiling has been demonstrated as a challenging option for a comprehensive characterisation of small molecules occurring in plants, since their pattern may reflect the impact of various external factors. In a two-year pilot study, concerned with the classification of organic versus conventional crops, ambient mass spectrometry consisting of a direct analysis in real time (DART) ion source and a time-of-flight mass spectrometer (TOFMS) was employed. This novel methodology was tested on 40 tomato and 24 pepper samples grown under specified conditions. To calculate statistical models, the obtained data (mass spectra) were processed by the principal component analysis (PCA) followed by linear discriminant analysis (LDA). The results from the positive ionisation mode enabled better differentiation between organic and conventional samples than the results from the negative mode. In this case, the recognition ability obtained by LDA was 97.5% for tomato and 100% for pepper samples and the prediction abilities were above 80% for both sample sets. The results suggest that the year of production had stronger influence on the metabolomic fingerprints compared with the type of farming (organic versus conventional). In any case, DART-TOFMS is a promising tool for rapid screening of samples. Establishing comprehensive (multi-sample) long-term databases may further help to improve the quality of statistical classification models.
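The PCA-followed-by-LDA classification step maps onto a standard scikit-learn pipeline. The sketch below uses placeholder spectral features rather than DART-TOFMS data, and treats resubstitution accuracy as "recognition ability" and cross-validated accuracy as "prediction ability", mirroring the terms above.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Placeholder fingerprints: 40 samples x 500 m/z bins, two farming classes.
X = rng.normal(size=(40, 500))
y = np.repeat([0, 1], 20)          # 0 = conventional, 1 = organic (labels assumed)
X[y == 1, :25] += 0.8              # inject a small class difference

model = make_pipeline(StandardScaler(), PCA(n_components=10), LinearDiscriminantAnalysis())
recognition = model.fit(X, y).score(X, y)               # resubstitution ("recognition") accuracy
prediction = cross_val_score(model, X, y, cv=5).mean()  # cross-validated ("prediction") accuracy
print(f"recognition={recognition:.2%}, prediction={prediction:.2%}")
```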
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power and positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and were comprised of the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base-rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
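The accuracy statistics listed are all derived from one 2x2 table of test classification against the IADL criterion; a sketch with hypothetical counts:

```python
def accuracy_stats(tp, fn, fp, tn):
    """Classification accuracy statistics from a 2x2 table of test vs. criterion."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)
    return dict(sensitivity=sens, specificity=spec, PPV=ppv, NPV=npv, LR_plus=lr_pos)

# Hypothetical counts (not the study's data): impaired IADLs vs. battery classification.
print(accuracy_stats(tp=40, fn=10, fp=20, tn=47))
```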
Grotjahn, Richard; Black, Robert; Leung, Ruby; ...
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). Methods used to define extreme-event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.
NASA Astrophysics Data System (ADS)
Dairaku, K.
2017-12-01
The Asia-Pacific regions are increasingly threatened by large-scale natural disasters, and there are growing concerns that losses and damages from natural disasters will be further exacerbated by climate change and socio-economic change. Climate information and services for risk assessments are therefore of great concern. Fundamental regional climate information is indispensable for understanding the changing climate and making decisions on when and how to act. To meet the needs of stakeholders such as national and local governments, spatio-temporally comprehensive and consistent information is necessary and useful for decision making. Multi-model ensemble regional climate scenarios with 1-km horizontal grid spacing over Japan are developed using 37 CMIP5 GCMs (RCP8.5) and a statistical downscaling method (Bias-Corrected Spatial Disaggregation, BCSD) to investigate the uncertainty of projected change associated with structural differences among the GCMs for the periods of historical climate (1950-2005) and near-future climate (2026-2050). The statistically downscaled regional climate scenarios show good performance for annual and seasonal averages of precipitation and temperature. The regional climate scenarios show a systematic underestimate of extreme events, such as hot days over 35 °C and annual maximum daily precipitation, because of the interpolation processes in the BCSD method. Each model projects a different response in the near-future climate because of structural differences, although most of the 37 CMIP5 models show a qualitatively consistent increase in average and extreme temperature and precipitation. The added values of statistical and dynamical downscaling methods are also investigated for locally forced nonlinear phenomena and extreme events.
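The bias-correction half of BCSD is empirical quantile mapping: each model value is looked up at its quantile in the model climatology and replaced by the observed value at that quantile. A sketch on synthetic temperatures follows (the spatial-disaggregation half is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: correct model values using the observed
    distribution at the quantile each value occupies in the model climatology."""
    q = np.interp(model_future,
                  np.sort(model_hist),
                  np.linspace(0, 1, len(model_hist)))  # model CDF value
    return np.quantile(obs_hist, q)                    # observed value at that quantile

# Synthetic daily temperatures: the model runs ~2 degrees too cold with less spread.
obs = rng.normal(15.0, 5.0, 5000)
mod_hist = rng.normal(13.0, 4.0, 5000)
mod_fut = rng.normal(14.5, 4.0, 5000)  # raw future projection

corrected = quantile_map(mod_hist, obs, mod_fut)
print("raw future mean:", mod_fut.mean().round(2), "-> corrected:", corrected.mean().round(2))
```

Note that, as the abstract observes, mapping through a finite empirical climatology tends to clip the tails, which is one reason extremes are underestimated.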
High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets
NASA Astrophysics Data System (ADS)
Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong
2008-02-01
Stock investors usually make their short-term investment decisions according to recent stock information such as late market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock price, this paper proposes a comprehensive fuzzy time-series model, which factors both linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time series into its forecasting processes. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen's (1996), Yu's (2005), Cheng's (2006) and Chen's (2007), are used as comparison models. In addition, to compare with a conventional statistical method, the method of least squares is utilized to estimate auto-regressive models over the testing periods within the databases. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which factor only fuzzy logical relationships into their forecasting processes. Both the traditional statistical method and the proposed model reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.
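One of the comparison models, Chen's (1996) fuzzy time series, is compact enough to sketch: partition the universe of discourse into intervals, fuzzify each observation, group the fuzzy logical relationships, and forecast with group-midpoint averages. The prices below are invented; this is the baseline method, not the paper's multi-period adaptation model.

```python
import numpy as np

def chen_fts_forecast(series, n_intervals=7):
    """Minimal Chen (1996)-style fuzzy time-series forecaster: fuzzify values into
    intervals, group fuzzy logical relationships A_i -> {A_j}, and forecast the
    next value as the mean midpoint of the group for the current fuzzy state."""
    lo, hi = series.min() - 1, series.max() + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    states = np.clip(np.digitize(series, edges) - 1, 0, n_intervals - 1)

    groups = {}  # fuzzy logical relationship groups
    for a, b in zip(states[:-1], states[1:]):
        groups.setdefault(a, set()).add(b)

    last = states[-1]
    nxt = sorted(groups.get(last, {last}))
    return mids[nxt].mean()

# Toy index series (hypothetical closing prices).
prices = np.array([9100, 9150, 9120, 9200, 9260, 9240, 9300, 9280, 9350, 9330.0])
print("next-step forecast:", round(chen_fts_forecast(prices), 1))
```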
MetaGenyo: a web tool for meta-analysis of genetic association studies.
Martorell-Marugan, Jordi; Toro-Dominguez, Daniel; Alarcon-Riquelme, Marta E; Carmona-Saez, Pedro
2017-12-16
Genetic association studies (GAS) aim to evaluate the association between genetic variants and phenotypes. In the last few years, the number of studies of this type has increased exponentially, but the results are not always reproducible due to experimental designs, low sample sizes and other methodological errors. In this field, meta-analysis techniques are becoming very popular tools to combine results across studies, increase statistical power and resolve discrepancies. A meta-analysis summarizes research findings, increases statistical power and enables the identification of genuine associations between genotypes and phenotypes. Meta-analysis techniques are increasingly used in GAS, but the number of published meta-analyses containing errors is also increasing. Although there are several software packages that implement meta-analysis, none of them is specifically designed for genetic association studies, and in most cases their use requires advanced programming or scripting expertise. We have developed MetaGenyo, a web tool for meta-analysis in GAS. MetaGenyo implements a complete and comprehensive workflow that can be executed in an easy-to-use environment without programming knowledge. MetaGenyo has been developed to guide users through the main steps of a GAS meta-analysis, covering the Hardy-Weinberg test, statistical association for different genetic models, analysis of heterogeneity, testing for publication bias, subgroup analysis and robustness testing of the results. MetaGenyo is a useful tool for conducting comprehensive genetic association meta-analyses. The application is freely available at http://bioinfo.genyo.es/metagenyo/.
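The pooling step such tools automate is inverse-variance weighting of per-study log odds ratios. A fixed-effect sketch with hypothetical 2x2 tables follows (MetaGenyo itself additionally covers genetic-model selection, heterogeneity, and publication-bias tests):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-study 2x2 tables: [cases_exposed, cases_unexposed,
#                                     controls_exposed, controls_unexposed]
tables = np.array([
    [30, 70, 20, 80],
    [45, 55, 35, 65],
    [25, 75, 15, 85],
])

a, b, c, d = tables.T.astype(float)
log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of each log odds ratio
w = 1 / var                           # inverse-variance weights

pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
z = pooled / se
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
print(f"pooled OR={np.exp(pooled):.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), "
      f"p={2 * norm.sf(abs(z)):.4f}")
```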
The cost of starting and maintaining a large home hemodialysis program.
Komenda, Paul; Copland, Michael; Makwana, Jay; Djurdjev, Ogdjenka; Sood, Manish M; Levin, Adeera
2010-06-01
Home extended-hours hemodialysis improves some measurable biological and quality-of-life parameters over conventional renal replacement therapies in patients with end-stage renal disease. Published small studies evaluating costs have shown savings in ongoing operating costs with this modality. However, all estimates need to include the total costs, including infrastructure, patient training, and maintenance; patient attrition by death, transplantation, or technique failure; and the necessity of in-center dialysis. We describe a comprehensive funding model for a large, centrally administered but locally delivered home hemodialysis program in British Columbia, Canada that covered 122 patients, of whom 113 were still in the program at study end. The majority of patients performed home nocturnal hemodialysis in this 2-year retrospective study. All training periods, both in-center and in-home dialysis, medications, hospitalizations, and deaths were captured using our provincial renal database and vital statistics. Comparative data from the provincial database and pricing models were used for costing purposes. The total comprehensive cost per patient, incorporating startup, home and in-center dialysis, medications, home remodeling, and consumables, was $59,179 for years 2004-2005 and $48,648 for 2005-2006. The home dialysis patients required multiple in-center dialysis runs, significantly contributing to the overall costs. Our study describes a valid, comprehensive funding model delineating reliable cost estimates for starting and maintaining a large home-based hemodialysis program. Consideration of hidden costs is important when administrators and planners design budgets for home hemodialysis.
Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao
2018-05-01
Flooding is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship among the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in the eastern area of Hainan, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers make decisions in a more comprehensive way. In summary, this study provides a new way to assess flood vulnerability and support disaster prevention decisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
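The FCEM step combines an indicator weight vector W with a fuzzy relation (membership) matrix R into an evaluation vector B = W * R; with the weighted-average operator this is just a matrix product. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical weights for 4 indicators and their membership degrees in
# 4 vulnerability grades (low, moderate, high, very high); rows of R sum to 1.
W = np.array([0.30, 0.25, 0.25, 0.20])
R = np.array([
    [0.1, 0.4, 0.4, 0.1],
    [0.0, 0.2, 0.5, 0.3],
    [0.3, 0.4, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
])

B = W @ R          # weighted-average operator M(*, +)
B /= B.sum()       # normalize the evaluation vector
grades = ["low", "moderate", "high", "very high"]
print(dict(zip(grades, B.round(3))), "->", grades[int(B.argmax())])
```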
A Principal Component Analysis/Fuzzy Comprehensive Evaluation for Rockburst Potential in Kimberlite
NASA Astrophysics Data System (ADS)
Pu, Yuanyuan; Apel, Derek; Xu, Huawei
2018-02-01
Kimberlite is an igneous rock which sometimes bears diamonds. Most of the diamonds mined in the world today are found in kimberlite ores. Burst potential in kimberlite has not been investigated, because kimberlite is mostly mined using open-pit methods, which pose very little threat of rock bursting. However, as mining depth increases, mines convert to underground mining methods, which can pose a threat of rock bursting in kimberlite. This paper focuses on the burst potential of kimberlite at a diamond mine in northern Canada. A combined model using the methods of principal component analysis (PCA) and fuzzy comprehensive evaluation (FCE) is developed to process data from 12 different locations in kimberlite pipes. Based on the 12 calculated fuzzy evaluation vectors, 8 locations show a moderate burst potential, 2 locations show no burst potential, and 2 locations show strong and violent burst potential, respectively. Using statistical principles, a Mahalanobis distance is adopted to build a comprehensive fuzzy evaluation vector for the whole mine; the final evaluation of burst potential is moderate, which is verified by the practical rockbursting situation at the mine site.
Nonparametric statistical modeling of binary star separations
NASA Technical Reports Server (NTRS)
Heacox, William D.; Gathright, John
1994-01-01
We develop a comprehensive statistical model for the distribution of observed separations in binary star systems, in terms of distributions of orbital elements, projection effects, and distances to systems. We use this model to derive several diagnostics for estimating the completeness of imaging searches for stellar companions, and the underlying stellar multiplicities. In application to recent imaging searches for low-luminosity companions to nearby M dwarf stars, and for companions to young stars in nearby star-forming regions, our analyses reveal substantial uncertainty in estimates of stellar multiplicity. For binary stars with late-type dwarf companions, semimajor axes appear to be distributed approximately as a^(-1) for values ranging from about one to several thousand astronomical units. About one-quarter of the companions to field F and G dwarf stars have semimajor axes less than 1 AU, and about 15% lie beyond 1000 AU. The geometric efficiency (fraction of companions imaged onto the detector) of imaging searches is nearly independent of distances to program stars and orbital eccentricities, and varies only slowly with detector spatial limitations.
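The projection effect at the heart of this model is easy to explore by Monte Carlo: draw semimajor axes from the a^(-1) (log-uniform) law quoted above, orient separation vectors isotropically, and histogram the projected separations. The sketch below ignores eccentricity and orbital-phase sampling, and the axis range and detector annulus are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Semimajor axes ~ a^(-1) (log-uniform) between 1 and 5000 AU, as suggested above.
a = np.exp(rng.uniform(np.log(1.0), np.log(5000.0), n))

# Randomly oriented separation vectors: projected separation s = a * sin(gamma),
# with cos(gamma) uniform for isotropic orientations (a simplification of full
# orbital-element sampling; eccentricity and phase are ignored here).
u = rng.uniform(0.0, 1.0, n)
s = a * np.sqrt(1.0 - u ** 2)

# Fraction of companions whose projected separation falls inside a hypothetical
# detector annulus of 10-1000 AU at the system distance (a "geometric efficiency").
inner, outer = 10.0, 1000.0
print("geometric efficiency:", np.mean((s > inner) & (s < outer)).round(3))
```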
Parvaneh, Khalil; Shariati, Alireza
2017-09-07
In this study, a new modification of the perturbed-chain statistical associating fluid theory (PC-SAFT) has been proposed by incorporating the lattice fluid theory of Guggenheim as an additional term to the original PC-SAFT terms. As the proposed model has one more term than the PC-SAFT, a new mixing rule has been developed especially for the new additional term, while for the conventional terms of the PC-SAFT the one-fluid mixing rule is used. To evaluate the proposed model, vapor-liquid equilibria were estimated for binary CO2 mixtures with 16 different ionic liquids (ILs) of the 1-alkyl-3-methylimidazolium family with various anions consisting of bis(trifluoromethylsulfonyl)imide, hexafluorophosphate, tetrafluoroborate, and trifluoromethanesulfonate. For a comprehensive comparison, three different modes (different adjustable parameters) of the proposed model were compared with the conventional PC-SAFT. Results indicate that the proposed modification of the PC-SAFT EoS is generally more reliable than the conventional PC-SAFT in all three proposed modes, giving good agreement with literature vapor-liquid equilibrium data.
Analytical mesoscale modeling of aeolian sand transport
NASA Astrophysics Data System (ADS)
Lämmel, Marc; Kroy, Klaus
2017-11-01
The mesoscale structure of aeolian sand transport determines a variety of natural phenomena studied in planetary and Earth science. We analyze it theoretically beyond the mean-field level, based on the grain-scale transport kinetics and splash statistics. A coarse-grained analytical model is proposed and verified by numerical simulations resolving individual grain trajectories. The predicted height-resolved sand flux and other important characteristics of the aeolian transport layer agree remarkably well with a comprehensive compilation of field and wind-tunnel data, suggesting that the model robustly captures the essential mesoscale physics. By comparing the predicted saturation length with field data for the minimum sand-dune size, we elucidate the importance of intermittent turbulent wind fluctuations for field measurements and reconcile conflicting previous models for this most enigmatic emergent aeolian scale.
Sturgeon, John A; Zautra, Alex J
2013-03-01
Pain is a complex construct that contributes to profound physical and psychological dysfunction, particularly in individuals coping with chronic pain. The current paper builds upon previous research, describes a balanced conceptual model that integrates aspects of both psychological vulnerability and resilience to pain, and reviews protective and exacerbating psychosocial factors to the process of adaptation to chronic pain, including pain catastrophizing, pain acceptance, and positive psychological resources predictive of enhanced pain coping. The current paper identifies future directions for research that will further enrich the understanding of pain adaptation and espouses an approach that will enhance the ecological validity of psychological pain coping models, including introduction of advanced statistical and conceptual models that integrate behavioral, cognitive, information processing, motivational and affective theories of pain.
Novel approaches to the study of particle dark matter in astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Argüelles, C. R., E-mail: carlos.arguelles@icranet.org; Ruffini, R., E-mail: ruffini@icra.it; Rueda, J. A., E-mail: jorge.rueda@icra.it
A deep understanding of the role of dark matter in the different astrophysical scenarios of the local Universe, such as galaxies, represents a crucial step toward describing in a more consistent way the role of dark matter in cosmology. This kind of study requires the interconnection between particle physics within and beyond the Standard Model and fundamental physics such as thermodynamics and statistics, within a fully relativistic treatment of gravity. After giving a comprehensive summary of the different types of dark matter and their role in astrophysics, we discuss recent efforts to describe the distribution of dark matter in the centers and halos of galaxies from first principles such as gravitational interactions, quantum statistics and particle physics, and the implications for observations.
Enabling a Comprehensive Teaching Strategy: Video Lectures
ERIC Educational Resources Information Center
Brecht, H. David; Ogilby, Suzanne M.
2008-01-01
This study empirically tests the feasibility and effectiveness of video lectures as a form of video instruction that enables a comprehensive teaching strategy used throughout a traditional classroom course. It examines student use patterns and the videos' effects on student learning, using qualitative and nonparametric statistical analyses of…
ERIC Educational Resources Information Center
Hartwig, Elizabeth Kjellstrand; Van Overschelde, James P.
2016-01-01
The authors investigated predictor variables for the Counselor Preparation Comprehensive Examination (CPCE) to examine whether academic variables, demographic variables, and test version were associated with graduate counseling students' CPCE scores. Multiple regression analyses revealed all 3 variables were statistically significant predictors of…
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size: a combinatorial explosion in experimentation and model building as the number of variables grows, sacrificing both efficiency and accuracy. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Thrasher, James F; Besley, John C; González, Wendy
2010-03-01
The World Health Organization's Framework Convention on Tobacco Control promotes comprehensive smoke-free laws. The effective implementation of these laws requires citizen participation and support. Risk communication research suggests that citizens' perceptions of the fairness of smoke-free laws would help explain their support for the law. This study aimed to assess the factors that correlate with citizens' perceptions of the distributive, procedural and interpersonal justice of smoke-free laws, as well as how these perceptions are related to support for and intention to help enforce these laws. Study data came from a cross-sectional, population-based survey of 800 Mexico City inhabitants before a comprehensive smoke-free policy was implemented there in 2008. Structural equation modeling was used to estimate the bivariate and multivariate adjusted paths relating study variables. In the final multivariate model, the three justice concepts mediated the influence of smoking status, perceived dangers of secondhand smoke exposure, strength of home smoking ban, and perceived rights of smokers on the two distal constructs of support for smoke-free policy and intention to help enforce it. Statistically significant paths were estimated from distributive and procedural justice to support for the law and intention help enforce it. The path from interpersonal justice to support for the law was not significant, but the path to intention to help enforce the law was. Finally, the path from support for the law to the intention to enforce it was statistically significant. These results suggest that three distinct dimensions of perceived justice help explain citizen support for smoke-free policies. These dimensions of perceived justice may explain the conditions under which smoke-free policies are effectively implemented and could help shape the focus for communication strategies that aim to ensure effective implementation of this and other public health policies. 2009 Elsevier Ltd. All rights reserved.
2011 statistical abstract of the United States
Krisanda, Joseph M.
2011-01-01
The Statistical Abstract of the United States, published since 1878, is the authoritative and comprehensive summary of statistics on the social, political, and economic organization of the United States. Use the Abstract as a convenient volume for statistical reference, and as a guide to sources of more information both in print and on the Web. Sources of data include the Census Bureau, Bureau of Labor Statistics, Bureau of Economic Analysis, and many other Federal agencies and private organizations.
Pearce, B.D.; Grove, J.; Bonney, E.A.; Bliwise, N.; Dudley, D.J.; Schendel, D.E.; Thorsen, P.
2010-01-01
Background/Aims To examine the relationship of biological mediators (cytokines, stress hormones), psychosocial factors, obstetric history, and demographic factors in the early prediction of preterm birth (PTB) using a comprehensive logistic regression model incorporating diverse risk factors. Methods In this prospective case-control study, maternal serum biomarkers were quantified at 9–23 weeks’ gestation in 60 women delivering at <37 weeks compared to 123 women delivering at term. Biomarker data were combined with maternal sociodemographic factors and stress data into regression models encompassing 22 preterm risk factors and 1st-order interactions. Results Among individual biomarkers, we found that macrophage migration inhibitory factor (MIF), interleukin-10, C-reactive protein (CRP), and tumor necrosis factor-α were statistically significant predictors of PTB at all cutoff levels tested (75th, 85th, and 90th percentiles). We fit multifactor models for PTB prediction at each biomarker cutoff. Our best models revealed that MIF, CRP, risk-taking behavior, and low educational attainment were consistent predictors of PTB at all biomarker cutoffs. The 75th percentile cutoff yielded the best predicting model with an area under the ROC curve of 0.808 (95% CI 0.743–0.874). Conclusion Our comprehensive models highlight the prominence of behavioral risk factors for PTB and point to MIF as a possible psychobiological mediator. PMID:20160447
Predicting institutionalization after traumatic brain injury inpatient rehabilitation.
Eum, Regina S; Seel, Ronald T; Goldstein, Richard; Brown, Allen W; Watanabe, Thomas K; Zasler, Nathan D; Roth, Elliot J; Zafonte, Ross D; Glenn, Mel B
2015-02-15
Risk factors contributing to institutionalization after inpatient rehabilitation for people with traumatic brain injury (TBI) have not been well studied and need to be better understood to guide clinicians during rehabilitation. We aimed to develop a prognostic model that could be used at admission to inpatient rehabilitation facilities to predict discharge disposition. The model could be used to provide the interdisciplinary team with information regarding aspects of patients' functioning and/or their living situation that need particular attention during inpatient rehabilitation if institutionalization is to be avoided. The study population included 7219 patients with moderate-severe TBI in the Traumatic Brain Injury Model Systems (TBIMS) National Database enrolled from 2002-2012 who had not been institutionalized prior to injury. Based on institutionalization predictors in other populations, we hypothesized that among people who had lived at a private residence prior to injury, greater dependence in locomotion, bed-chair-wheelchair transfers, bladder and bowel continence, feeding, and comprehension at admission to inpatient rehabilitation programs would predict institutionalization at discharge. Logistic regression was used, with adjustment for demographic factors, proxy measures for TBI severity, and acute-care length-of-stay. C-statistic and predictiveness curves validated a five-variable model. Higher levels of independence in bladder management (adjusted odds ratio [OR], 0.88; 95% CI 0.83, 0.93), bed-chair-wheelchair transfers (OR, 0.81 [95% CI, 0.83-0.93]), and comprehension (OR, 0.78 [95% CI, 0.68, 0.89]) at admission were associated with lower risks of institutionalization at discharge. Every 10-year increment in age was associated with a 1.38 times higher risk of institutionalization (95% CI, 1.29, 1.48), and living alone was associated with a 2.34 times higher risk (95% CI, 1.86, 2.94). The c-statistic was 0.780. We conclude that this simple model can predict risk of institutionalization after inpatient rehabilitation for patients with TBI.
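As a rough illustration of this kind of admission-time prognostic modeling, the sketch below fits a logistic regression on simulated data and reports adjusted odds ratios and a c-statistic. All variable names, coefficients, and data are hypothetical stand-ins, not the TBIMS analysis itself.

```python
# Hypothetical sketch of a five-variable admission-time prognostic model;
# the simulated data and effect sizes are illustrative only.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 8, n),   # bladder management (FIM-style 1-7 scale)
    rng.integers(1, 8, n),   # bed-chair-wheelchair transfers
    rng.integers(1, 8, n),   # comprehension
    rng.normal(45, 18, n),   # age in years
    rng.integers(0, 2, n),   # lived alone pre-injury (0/1)
])
# Simulated outcome: institutionalized at discharge (0/1)
logit = -1.5 - 0.12*X[:, 0] - 0.2*X[:, 1] - 0.25*X[:, 2] + 0.03*X[:, 3] + 0.8*X[:, 4]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(model.params[1:]))                              # adjusted odds ratios
print(roc_auc_score(y, model.predict(sm.add_constant(X))))   # c-statistic (AUC)
```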
The Impact of Language Experience on Language and Reading: A Statistical Learning Approach
ERIC Educational Resources Information Center
Seidenberg, Mark S.; MacDonald, Maryellen C.
2018-01-01
This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…
Chinese Obstetrics & Gynecology journal club: a randomised controlled trial.
Tsui, Ilene K; Dodson, William C; Kunselman, Allen R; Kuang, Hongying; Han, Feng-Juan; Legro, Richard S; Wu, Xiao-Ke
2016-01-28
To assess whether a journal club model could improve comprehension and written and spoken medical English in a population of Chinese medical professionals. The study population consisted of 52 medical professionals who were residents or postgraduate master or PhD students in the Department of Obstetrics and Gynecology, Heilongjiang University of Chinese Medicine, China. After a three-part baseline examination to assess medical English comprehension, participants were randomised to either (1) an intensive journal club treatment arm or (2) a self-study group. At the conclusion of the 8-week intervention participants (n=52) were re-tested with new questions. The primary outcome was the change in score on a multiple choice examination. Secondary outcomes included change in scores on written and oral examinations which were modelled on the Test of English as a Foreign Language (TOEFL). Both groups had improved scores on the multiple choice examination without a statistically significant difference between them (90% power). However, there was a statistically significant difference between the groups in mean improvement in scores for both written (95% CI 1.1 to 5.0; p=0.003) and spoken English (95% CI 0.06 to 3.7; p=0.04) favouring the journal club intervention. Interacting with colleagues and an English-speaking facilitator in a journal club improved both written and spoken medical English in Chinese medical professionals. Journal clubs may be suitable for use as a self-sustainable teaching model to improve fluency in medical English in foreign medical professionals. NCT01844609.
Paré, Pierre; Lee, Joanna; Hawes, Ian A
2010-03-01
To determine whether strategies to counsel and empower patients with heartburn-predominant dyspepsia could improve health-related quality of life. Using a cluster randomized, parallel group, multicentre design, nine centres were assigned to provide either basic or comprehensive counselling to patients (age range 18 to 50 years) presenting with heartburn-predominant upper gastrointestinal symptoms, who would be considered for drug therapy without further investigation. Patients were treated for four weeks with esomeprazole 40 mg once daily, followed by six months of treatment that was at the physician's discretion. The primary end point was the change from baseline in Quality of Life in Reflux and Dyspepsia (QOLRAD) questionnaire score. A total of 135 patients from nine centres were included in the intention-to-treat analysis. There was a statistically significant improvement from baseline in all domains of the QOLRAD questionnaire in both study arms at four and seven months (P<0.0001). After four months, the overall mean change in QOLRAD score appeared greater in the comprehensive counselling group than in the basic counselling group (1.77 versus 1.47, respectively); however, this difference was not statistically significant (P=0.07). After seven months, the difference in overall mean change from baseline in QOLRAD score between the comprehensive and basic counselling groups was not statistically significant (1.69 versus 1.56, respectively; P=0.63). A standardized, comprehensive counselling intervention showed a positive initial trend in improving quality of life in patients with heartburn-predominant uninvestigated dyspepsia. Further investigation is needed to confirm the potential benefits of providing patients with comprehensive counselling regarding disease management.
Further characterisation of the functional neuroanatomy associated with prosodic emotion decoding.
Mitchell, Rachel L C
2013-06-01
Current models of prosodic emotion comprehension propose a three-stage process mediated by temporal lobe auditory regions through to inferior and orbitofrontal regions. Cumulative evidence suggests that its mediation may be more flexible, though, with a facility to respond in a graded manner based on the need for executive control. The location of this fine-tuning system is unclear, as is its similarity to the cognitive control system. In the current study, the need for executive control was manipulated in a block-design functional MRI study by systematically altering the proportion of incongruent trials across time, i.e., trials for which participants identified prosodic emotions in the face of conflicting lexico-semantic emotion cues. Resultant Blood Oxygenation Level Dependent contrast data were analysed according to standard procedures using Statistical Parametric Mapping v8 (Ashburner et al., 2009). In the parametric analyses, superior (medial) frontal gyrus activity increased linearly with increased need for executive control. In the separate analyses of each level of incongruity, results suggested that the baseline prosodic emotion comprehension system was sufficient to deal with low proportions of incongruent trials, whereas a more widespread frontal lobe network was required for higher proportions. These results suggest that an executive control system for prosodic emotion comprehension exists which has the capability to recruit the superior (medial) frontal gyrus in a graded manner, and other frontal regions once demand exceeds a certain threshold. The need to revise current models of prosodic emotion comprehension and add a fourth processing stage is discussed.
Winslow, Luke; Zwart, Jacob A.; Batt, Ryan D.; Dugan, Hilary; Woolway, R. Iestyn; Corman, Jessica; Hanson, Paul C.; Read, Jordan S.
2016-01-01
Metabolism is a fundamental process in ecosystems that crosses multiple scales of organization from individual organisms to whole ecosystems. To improve sharing and reuse of published metabolism models, we developed LakeMetabolizer, an R package for estimating lake metabolism from in situ time series of dissolved oxygen, water temperature, and, optionally, additional environmental variables. LakeMetabolizer implements 5 different metabolism models with diverse statistical underpinnings: bookkeeping, ordinary least squares, maximum likelihood, Kalman filter, and Bayesian. Each of these 5 metabolism models can be combined with 1 of 7 models for computing the coefficient of gas exchange across the air–water interface (k). LakeMetabolizer also features a variety of supporting functions that compute conversions and implement calculations commonly applied to raw data prior to estimating metabolism (e.g., oxygen saturation and optical conversion models). These tools have been organized into an R package that contains example data, example use-cases, and function documentation. The release package version is available on the Comprehensive R Archive Network (CRAN), and the full open-source GPL-licensed code is freely available for examination and extension online. With this unified, open-source, and freely available package, we hope to improve access and facilitate the application of metabolism in studies and management of lentic ecosystems.
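For flavor, here is a minimal sketch of the simplest of the five approaches, the bookkeeping model, on a synthetic dissolved-oxygen series. It ignores the air-water gas exchange term that LakeMetabolizer's k models supply, and the column names are invented, so treat it as an idea sketch rather than the package's implementation.

```python
# Bookkeeping-style metabolism sketch: respiration from nighttime DO change,
# GPP from daytime change corrected for respiration. Gas exchange omitted.
import numpy as np
import pandas as pd

def bookkeeping_metabolism(df):
    """df: columns 'do_mgL' (dissolved oxygen) and 'is_day' (bool), fixed time step."""
    d_do = df["do_mgL"].diff().dropna()          # per-step DO change
    day = df["is_day"].iloc[1:].to_numpy()       # day/night flag for each change
    r_rate = d_do[~day].mean()                   # nighttime rate ~ respiration (negative)
    r_daily = r_rate * len(day)                  # respiration extrapolated to whole day
    gpp = d_do[day].sum() - r_rate * day.sum()   # daytime change minus daytime respiration
    return gpp, r_daily, gpp + r_daily           # GPP, R, NEP

rng = np.random.default_rng(0)
ts = pd.DataFrame({"do_mgL": 8 + np.cumsum(rng.normal(0.0, 0.02, 144)),
                   "is_day": [False]*36 + [True]*72 + [False]*36})
print(bookkeeping_metabolism(ts))
```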
NASA Astrophysics Data System (ADS)
Jarzyna, Jadwiga A.; Krakowska, Paulina I.; Puskarczyk, Edyta; Wawrzyniak-Guz, Kamila; Zych, Marcin
2018-03-01
More than 70 rock samples from so-called sweet spots, i.e. the Ordovician Sa Formation and the Silurian Ja Member of the Pa Formation from the Baltic Basin (North Poland), were examined in the laboratory to determine bulk and grain density, total and effective/dynamic porosity, absolute permeability, pore diameters, total surface area, and natural radioactivity. Results of pyrolysis, i.e., TOC (Total Organic Carbon) together with S1 and S2 - parameters used to determine the hydrocarbon generation potential of rocks - were also considered. Elemental composition from chemical analyses and mineral composition from XRD measurements were also included. SCAL analysis, NMR experiments, and Pressure Decay Permeability measurements, together with water immersion porosimetry and the nitrogen vapor adsorption/desorption method, were carried out along with a comprehensive interpretation of the outcomes. Simple and multiple linear statistical regressions were used to recognize mutual relationships between parameters. The observed correlations and, in some cases, the large dispersion of the data and discrepancies in the property values obtained from different methods were the basis for building a shale gas rock model for well logging interpretation. The model was verified by the results of Monte Carlo modelling of the spectral neutron-gamma log response in comparison with GEM log results.
Damron, T A; McBeath, A A
1995-04-01
With the increasing duration of follow-up on total knee arthroplasties, more revision arthroplasties are being performed. When revision is not advisable, a salvage procedure such as arthrodesis or resection arthroplasty is indicated. This article provides a comprehensive review of the literature regarding arthrodesis following failed total knee arthroplasty. In addition, a statistical meta-analysis of five studies using modern arthrodesis techniques is presented. A statistically significantly greater fusion rate with intramedullary nail arthrodesis than with external fixation is documented. Gram-negative and mixed infections are found to be significant risk factors for failure of arthrodesis.
Kling, Teresia; Johansson, Patrik; Sanchez, José; Marinescu, Voichita D.; Jörnsten, Rebecka; Nelander, Sven
2015-01-01
Statistical network modeling techniques are increasingly important tools to analyze cancer genomics data. However, current tools and resources are not designed to work across multiple diagnoses and technical platforms, thus limiting their applicability to comprehensive pan-cancer datasets such as The Cancer Genome Atlas (TCGA). To address this, we describe a new data driven modeling method, based on generalized Sparse Inverse Covariance Selection (SICS). The method integrates genetic, epigenetic and transcriptional data from multiple cancers, to define links that are present in multiple cancers, a subset of cancers, or a single cancer. It is shown to be statistically robust and effective at detecting direct pathway links in data from TCGA. To facilitate interpretation of the results, we introduce a publicly accessible tool (cancerlandscapes.org), in which the derived networks are explored as interactive web content, linked to several pathway and pharmacological databases. To evaluate the performance of the method, we constructed a model for eight TCGA cancers, using data from 3900 patients. The model rediscovered known mechanisms and contained interesting predictions. Possible applications include prediction of regulatory relationships, comparison of network modules across multiple forms of cancer and identification of drug targets. PMID:25953855
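The paper's SICS method is a generalized, cross-platform formulation; as a small taste of the underlying idea, the sketch below uses scikit-learn's GraphicalLasso (a standard sparse inverse covariance estimator, not the authors' code) to turn synthetic data into network links.

```python
# Flavor of sparse inverse covariance selection: nonzero off-diagonal
# entries of the estimated precision matrix are interpreted as direct links.
# Synthetic data; this does not reproduce the paper's pan-cancer integration.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))          # 200 samples x 10 "genes"
X[:, 1] += 0.8 * X[:, 0]                # induce one direct dependency

gl = GraphicalLasso(alpha=0.2).fit(X)
precision = gl.precision_               # sparse inverse covariance matrix
links = np.argwhere(np.triu(np.abs(precision) > 1e-6, k=1))
print(links)                            # surviving off-diagonal pairs = network links
```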
A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)
Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.
2017-01-01
Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.
Prediction, Error, and Adaptation during Online Sentence Comprehension
ERIC Educational Resources Information Center
Fine, Alex Brabham
2013-01-01
A fundamental challenge for human cognition is perceiving and acting in a world in which the statistics that characterize available sensory data are non-stationary. This thesis focuses on this problem specifically in the domain of sentence comprehension, where linguistic variability poses computational challenges to the processes underlying…
ERIC Educational Resources Information Center
Coetzee, Stephen A.; Janse van Rensburg, Cecile; Schmulian, Astrid
2016-01-01
This study explores differences in students' reading comprehension of International Financial Reporting Standards in a South African financial reporting class with a heterogeneous student cohort. Statistically significant differences were identified for prior academic performance, language of instruction, first language and enrolment in the…
Emergent Readers' Social Interaction Styles and Their Comprehension Processes during Buddy Reading
ERIC Educational Resources Information Center
Christ, Tanya; Wang, X. Christine; Chiu, Ming Ming
2015-01-01
To examine the relations between emergent readers' social interaction styles and their comprehension processes, we adapted sociocultural and transactional views of learning and reading, and conducted statistical discourse analysis of 1,359 conversation turns transcribed from 14 preschoolers' 40 buddy reading events. Results show that interaction…
Transportation statistics annual report 1994
DOT National Transportation Integrated Search
1994-01-01
The Transportation Statistics Annual Report (TSAR) provides the most comprehensive overview of U.S. transportation that is done on an annual basis. TSAR examines the extent of the system, how it is used, how well it works, how it affects people and t...
A comparison of hydrologic models for ecological flows and water availability
Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G
2015-01-01
Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a ±30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.
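A compact sketch of the evaluation logic, assuming a single illustrative ERFS (the 7-day minimum flow) and synthetic daily streamflow; the paper's actual suite of 14 ERFSs is not reproduced.

```python
# Compute one ecologically relevant flow statistic from observed and
# simulated daily flows and test it against a +/-30% uncertainty band.
import numpy as np
import pandas as pd

def seven_day_min(q: pd.Series) -> float:
    return q.rolling(7).mean().min()     # minimum 7-day mean flow

def within_band(obs: pd.Series, sim: pd.Series, band: float = 0.30) -> bool:
    bias = (seven_day_min(sim) - seven_day_min(obs)) / seven_day_min(obs)
    return abs(bias) <= band

days = pd.date_range("2000-01-01", periods=3650, freq="D")
obs = pd.Series(np.random.lognormal(2, 0.5, len(days)), index=days)
sim = obs * np.random.normal(1.1, 0.05, len(days))   # model over-predicting ~10%
print(within_band(obs, sim))
```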
Analytical aspects of plant metabolite profiling platforms: current standings and future aims.
Seger, Christoph; Sturm, Sonja
2007-02-01
Over the past years, metabolic profiling has been established as a comprehensive systems biology tool. Mass spectrometry or NMR spectroscopy-based technology platforms combined with unsupervised or supervised multivariate statistical methodologies allow a deep insight into the complex metabolite patterns of plant-derived samples. Within this review, we provide a thorough introduction to the analytical hardware and software requirements of metabolic profiling platforms. Methodological limitations are addressed, and the metabolic profiling workflow is exemplified by summarizing recent applications ranging from model systems to more applied topics.
A computerized data base of nitrate concentrations in Indiana ground water
Risch, M.R.; Cohen, D.A.
1995-01-01
The nitrate data base was compiled from numerous data sets that were readily accessible in electronic format. The uses of these data may be limited because they were neither comprehensive nor of a single statistical design. Nonetheless, the nitrate data can be used in several ways: (1) to identify geographic areas with and without nitrate data; (2) to evaluate assumptions, models, and maps of ground-water-contamination potential; and (3) to investigate the relation between environmental factors, land-use types, and the occurrence of nitrate.
NASA Astrophysics Data System (ADS)
Boudard, Emmanuel; Morlaix, Sophie
2003-09-01
This article addresses the main predictors of adult education, using statistical methods different from those generally used by social science researchers. Its aim is twofold. First, it seeks to explain in a simple and comprehensible manner the methodological value of these methods (in relation to the use of structural models); secondly, it demonstrates the concrete usefulness of these methods on the basis of a recent piece of research on the data from the International Adult Literacy Survey (IALS).
Glass-Kaastra, Shiona K.; Pearl, David L.; Reid-Smith, Richard J.; McEwen, Beverly; Slavic, Durda; McEwen, Scott A.; Fairles, Jim
2014-01-01
Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection. PMID:24688133
NASA Astrophysics Data System (ADS)
Eum, H. I.; Cannon, A. J.
2015-12-01
Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, particularly for regions characterized by complex terrain such as the Korean peninsula. A downscaling procedure is therefore essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly because of their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis (CFSR) data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies; the ranking may therefore change when various GCMs are downscaled and evaluated. Nevertheless, it may be informative for end-users (i.e., modelers or water resources managers) seeking to understand and select downscaling methods suited to the priorities of their regional applications.
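TOPSIS itself is a generic multi-criteria ranking procedure; a minimal sketch with made-up method scores (rows) and metric weights (columns) might look like this:

```python
# Minimal TOPSIS sketch for ranking downscaling methods on performance
# metrics, all treated as benefit criteria; scores and weights are invented.
import numpy as np

def topsis(matrix, weights):
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector-normalize columns
    v = m * weights                                  # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)       # ideal best / worst points
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)              # closeness: higher = better

# rows: BCSD, BCCA, MACA, BCCI; columns: skill metrics scaled so higher is better
scores = topsis(np.array([[0.80, 0.70, 0.75],
                          [0.78, 0.72, 0.70],
                          [0.85, 0.80, 0.82],
                          [0.79, 0.74, 0.76]]),
                weights=np.array([0.4, 0.3, 0.3]))
print(scores.argsort()[::-1])   # ranking, best first
```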
The association between education and induced abortion for three cohorts of adults in Finland.
Väisänen, Heini
2015-01-01
This paper explores whether the likelihood of abortion by education changed over time in Finland, where comprehensive family planning services and sexuality education have been available since the early 1970s. This subject has not previously been studied longitudinally with comprehensive and reliable data. A unique longitudinal set of register data of more than 250,000 women aged 20-49 born in 1955-59, 1965-69, and 1975-79 was analysed, using descriptive statistics, concentration curves, and discrete-time event-history models. Women with basic education had a higher likelihood of abortion than others and the association grew stronger for later cohorts. Selection into education may explain this phenomenon: although it was fairly common to have only basic education in the 1955-59 cohort, it became increasingly unusual over time. Thus, even though family planning services were easily available, socio-economic differences in the likelihood of abortion remained.
Climate Change Impacts at Department of Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotamarthi, Rao; Wang, Jiali; Zoebel, Zach
This project is aimed at providing the U.S. Department of Defense (DoD) with a comprehensive analysis of the uncertainty associated with generating climate projections at the regional scale that can be used by stakeholders and decision makers to quantify and plan for the impacts of future climate change at specific locations. The merits and limitations of commonly used downscaling models, ranging from simple to complex, are compared, and their appropriateness for application at installation scales is evaluated. Downscaled climate projections are generated at selected DoD installations using dynamic and statistical methods, with an emphasis on generating probability distributions of climate variables and their associated uncertainties. The selection of sites, variables, and parameters for downscaling was based on a comprehensive understanding of the current and projected roles that weather and climate play in operating, maintaining, and planning DoD facilities and installations.
NASA Technical Reports Server (NTRS)
Sforza, Mario; Buonomo, Sergio
1993-01-01
During the period 1983-1992 the European Space Agency (ESA) carried out several experimental campaigns to investigate the propagation impairments of the Land Mobile Satellite (LMS) communication channel. A substantial amount of data covering quite a large range of elevation angles, environments, and frequencies was obtained. Results from the data analyses are currently used for system planning and design applications within the framework of future ESA LMS projects. This comprehensive experimental data base is presently utilized also for channel modeling purposes, and preliminary results are given. Cumulative Distribution Function (CDF) and Duration of Fades (DoF) statistics at different elevation angles and environments are also included.
Clauson, Kevin A; Polen, Hyla H; Peak, Amy S; Marsh, Wallace A; DiScala, Sandra L
2008-11-01
Clinical decision support tools (CDSTs) on personal digital assistants (PDAs) and online databases assist healthcare practitioners who make decisions about dietary supplements. To assess and compare the content of PDA dietary supplement databases and their online counterparts used as CDSTs. A total of 102 question-and-answer pairs were developed within 10 weighted categories of the most clinically relevant aspects of dietary supplement therapy. PDA versions of AltMedDex, Lexi-Natural, Natural Medicines Comprehensive Database, and Natural Standard and their online counterparts were assessed by scope (percent of correct answers present), completeness (3-point scale), ease of use, and a composite score integrating all 3 criteria. Descriptive and inferential statistics, including a chi-square test, Scheffé's multiple comparison test, McNemar's test, and the Wilcoxon signed rank test, were used to analyze the data. The scope scores for PDA databases were: Natural Medicines Comprehensive Database 84.3%, Natural Standard 58.8%, Lexi-Natural 50.0%, and AltMedDex 36.3%, with Natural Medicines Comprehensive Database statistically superior (p < 0.01). Completeness scores were: Natural Medicines Comprehensive Database 78.4%, Natural Standard 51.0%, Lexi-Natural 43.5%, and AltMedDex 29.7%. Lexi-Natural was superior in ease of use (p < 0.01). Composite scores for PDA databases were: Natural Medicines Comprehensive Database 79.3, Natural Standard 53.0, Lexi-Natural 48.0, and AltMedDex 32.5, with Natural Medicines Comprehensive Database superior (p < 0.01). There was no difference between the scope for PDA and online database pairs with Lexi-Natural (50.0% and 53.9%, respectively) or Natural Medicines Comprehensive Database (84.3% and 84.3%, respectively) (p > 0.05), whereas differences existed for AltMedDex (36.3% vs 74.5%, respectively) and Natural Standard (58.8% vs 80.4%, respectively) (p < 0.01). For composite scores, AltMedDex and Natural Standard online were better than their PDA counterparts (p < 0.01). Natural Medicines Comprehensive Database achieved significantly higher scope, completeness, and composite scores compared with other dietary supplement PDA CDSTs in this study. There was no difference between the PDA and online databases for Lexi-Natural and Natural Medicines Comprehensive Database, whereas online versions of AltMedDex and Natural Standard were significantly better than their PDA counterparts.
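The abstract does not give the exact composite formula, so the arithmetic below simply assumes scope, completeness, and ease of use are each rescaled to 0-100 and averaged; all inputs are invented for illustration.

```python
# Illustrative composite scoring of one database; the equal-weight average
# is an assumption, not the study's published formula.
answers_present = 86      # of 102 questions with a correct answer present -> scope
completeness_pts = 240    # sum of 0-3 completeness ratings over 102 questions
ease_of_use = 8.1         # hypothetical 0-10 usability rating

scope = answers_present / 102 * 100
completeness = completeness_pts / (102 * 3) * 100
ease = ease_of_use / 10 * 100
composite = (scope + completeness + ease) / 3
print(round(composite, 1))
```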
An integrative model of organizational safety behavior.
Cui, Lin; Fan, Di; Fu, Gui; Zhu, Cherrie Jiuhua
2013-06-01
This study develops an integrative model of safety management based on social cognitive theory and the total safety culture triadic framework. The purpose of the model is to reveal the causal linkages between a hazardous environment, safety climate, and individual safety behaviors. Based on primary survey data from 209 front-line workers in one of the largest state-owned coal mining corporations in China, the model is tested using structural equation modeling techniques. An employee's perception of a hazardous environment is found to have a statistically significant impact on employee safety behaviors through a psychological process mediated by the perception of management commitment to safety and individual beliefs about safety. The integrative model developed here leads to a comprehensive solution that takes into consideration the environmental, organizational and employees' psychological and behavioral aspects of safety management.
New generation of hydraulic pedotransfer functions for Europe
Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G
2015-01-01
A range of continental-scale soil datasets exists in Europe with different spatial representation and based on different principles. We developed comprehensive pedotransfer functions (PTFs) for applications principally on spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and prediction of parameters to characterize soil moisture retention and the hydraulic conductivity curve (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived by using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) a regression tree for qualitative, quantitative and mixed types of information and (iii) mean statistics of developer-defined soil groups (class PTF) when only qualitative input parameters were available. Data of the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydro-pedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable for a greater variety of input parameters than those previously available for Europe. Therefore the new set of PTFs offers tailored advanced tools for a wide range of applications in the continent. PMID:25866465
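The hierarchical idea, choosing the PTF form by what inputs are available, can be sketched as below; the coefficients and class means are placeholders, not the EU-HYDI fits.

```python
# Hierarchical pedotransfer-function sketch: a linear-regression PTF when
# quantitative inputs exist, a class PTF (group means) otherwise.
# All numbers are placeholders for illustration.
def ptf_water_retention(sand=None, clay=None, oc=None, texture_class=None):
    """Predict volumetric water content at field capacity (illustrative)."""
    if None not in (sand, clay, oc):
        # Linear-regression PTF on sand %, clay %, organic carbon %
        return 0.40 - 0.0018 * sand + 0.0010 * clay + 0.008 * oc
    # Class PTF: mean of a developer-defined soil group
    class_means = {"coarse": 0.18, "medium": 0.28, "fine": 0.36}
    return class_means[texture_class]

print(ptf_water_retention(sand=65.0, clay=12.0, oc=1.4))  # quantitative route
print(ptf_water_retention(texture_class="fine"))          # class-PTF fallback
```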
Hunt, Randall J.
2012-01-01
Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.
Use of Bloom's Taxonomy in Developing Reading Comprehension Specifications
ERIC Educational Resources Information Center
Luebke, Stephen; Lorie, James
2013-01-01
This article is a brief account of the use of Bloom's Taxonomy of Educational Objectives (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) by staff of the Law School Admission Council in the 1990 development of redesigned specifications for the Reading Comprehension section of the Law School Admission Test. Summary item statistics for the…
ERIC Educational Resources Information Center
Edge, D. Michael
2011-01-01
This non-experimental study attempted to determine how the different prescribed mathematic tracks offered at a comprehensive technical high school influenced the mathematics performance of low-achieving students on standardized assessments of mathematics achievement. The goal was to provide an analysis of any statistically significant differences…
Cost-Effectiveness of Comprehensive School Reform in Low Achieving Schools
ERIC Educational Resources Information Center
Ross, John A.; Scott, Garth; Sibbald, Tim M.
2012-01-01
We evaluated the cost-effectiveness of Struggling Schools, a user-generated approach to Comprehensive School Reform implemented in 100 low achieving schools serving disadvantaged students in a Canadian province. The results show that while Struggling Schools had a statistically significant positive effect on Grade 3 Reading achievement, d = 0.48…
Oakton Community College Comprehensive Annual Financial Report, Fiscal Year Ended June 30, 1996.
ERIC Educational Resources Information Center
Hilquist, David E.
Consisting primarily of tables, this report provides financial data on Oakton Community College in Illinois for the fiscal year ending on June 30, 1996. This comprehensive annual financial report consists of an introductory section, financial section, statistical section, and special reports section. The introductory section includes a transmittal…
Effects of a Decoding Program on a Child with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Infantino, Josephine; Hempenstall, Kerry
2006-01-01
This case study examined the effects of a parent-presented Direct Instruction decoding program on the reading and language skills of a child with high functioning Autism Spectrum Disorder. Following the 23 hour intervention, reading comprehension, listening comprehension and fluency skills improved to grade level, whilst statistically significant…
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology that combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences are validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and perform relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produces calibration and validation results comparable to those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and it is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient
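A bare-bones sketch of the Bayesian Monte Carlo step: draw parameters from a prior, weight each draw by a Gaussian likelihood of the observed DO series, and summarize the weighted posterior. The paper's analytic solution is replaced by a generic first-order recovery placeholder, so treat every number here as assumed.

```python
# Bayesian Monte Carlo (likelihood-weighting) sketch for one rate parameter.
import numpy as np

rng = np.random.default_rng(3)

def do_model(k, t):
    # Stand-in for the paper's analytical solution: first-order recovery
    # toward an assumed saturation level of 10 mg/L.
    return 10.0 * (1.0 - np.exp(-k * t))

t = np.arange(1.0, 30.0)
obs = do_model(0.12, t) + rng.normal(0, 0.3, t.size)   # synthetic "observations"

k_prior = rng.uniform(0.01, 0.5, 20000)                # Monte Carlo draws from prior
resid = obs - do_model(k_prior[:, None], t)            # residuals, one row per draw
loglik = -0.5 * np.sum((resid / 0.3) ** 2, axis=1)     # Gaussian log-likelihood
w = np.exp(loglik - loglik.max())
w /= w.sum()                                           # normalized posterior weights
print("posterior mean k:", (w * k_prior).sum())
```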
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Dartevelle
2005-09-05
The objective of this manuscript is to fully derive a geophysical multiphase model able to "accommodate" different multiphase turbulence approaches, viz., the Reynolds Averaged Navier-Stokes (RANS), the Large Eddy Simulation (LES), or hybrid RANS-LES. This manuscript is the first part of a larger geophysical multiphase project--led by LANL--that aims to develop comprehensive modeling tools for large-scale, atmospheric, transient-buoyancy dusty jets and plumes (e.g., plinian clouds, nuclear "mushrooms", "supercell" forest fire plumes) and for boundary-dominated geophysical multiphase gravity currents (e.g., dusty surges, diluted pyroclastic flows, dusty gravity currents in street canyons). LES is a partially deterministic approach constructed on either a spatial or a temporal separation between the large and small scales of the flow, whereas RANS is an entirely probabilistic approach constructed on a statistical separation between an ensemble-averaged mean and higher-order statistical moments (the so-called "fluctuating parts"). Within this specific multiphase context, both turbulence approaches are built upon the same phasic binary-valued "function of presence". This function of presence formally describes the occurrence--or not--of any phase at a given position and time and, therefore, allows deriving the same basic multiphase Navier-Stokes model for either the RANS or the LES frameworks. The only differences between these turbulence frameworks are the closures for the various "turbulence" terms involving the unknown variables from the fluctuating (RANS) or from the subgrid (LES) parts. Even though the hydrodynamic and thermodynamic models for RANS and LES have the same set of Partial Differential Equations, the physical interpretations of these PDEs cannot be the same, i.e., RANS models an averaged field, while LES simulates a filtered field. In this manuscript, we also demonstrate that this multiphase model fully fulfills the second law of thermodynamics and the necessary requirements for a well-posed initial-value problem. In subsequent manuscripts, we will further develop specific closures for multiphase RANS, LES, and hybrid-LES.
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
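As a single-channel illustration of the idea (not the paper's multi-channel procedure), the sketch below fits a low-order ARX model of operator output versus tracking error by least squares and evaluates its frequency response; orders, signals, and coefficients are invented.

```python
# ARX identification sketch: fit y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
# to synthetic operator data, then evaluate the discrete frequency response.
import numpy as np

dt = 0.05                                             # sample period, s
u = np.random.normal(size=2000)                       # tracking-error signal
y = np.zeros_like(u)
for k in range(2, len(u)):                            # synthetic operator dynamics
    y[k] = 1.5*y[k-1] - 0.6*y[k-2] + 0.4*u[k-1]

A = np.column_stack([y[1:-1], y[:-2], u[1:-1]])       # regressor matrix
a1, a2, b1 = np.linalg.lstsq(A, y[2:], rcond=None)[0]

w = np.logspace(-1, 1, 50)                            # frequencies, rad/s
z = np.exp(1j * w * dt)
H = b1 * z**-1 / (1 - a1*z**-1 - a2*z**-2)            # operator frequency response
print(np.abs(H[0]), np.angle(H[0]))                   # gain and phase at lowest w
```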
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of the diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families were investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: squared correlation coefficient = 0.9723, standard deviation error = 0.003, and average absolute relative deviation = 0.3% for the predicted properties relative to existing experimental values.
Elaborating Selected Statistical Concepts with Common Experience.
ERIC Educational Resources Information Center
Weaver, Kenneth A.
1992-01-01
Presents ways of elaborating statistical concepts so as to make course material more meaningful for students. Describes examples using exclamations, circus and cartoon characters, and falling leaves to illustrate variability, null hypothesis testing, and confidence interval. Concludes that the exercises increase student comprehension of the text…
NASA Astrophysics Data System (ADS)
Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona
2018-01-01
Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance in the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach to account for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) using self-organizing maps classification. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found to be present in 46% cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
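A minimal sketch of the contrast being tested, with synthetic rainfall whose distribution depends on a made-up synoptic-pattern label (standing in for the SOM classification): plain empirical quantile mapping versus quantile mapping applied separately within each pattern.

```python
# Empirical quantile mapping, unconditioned vs. conditioned on a synoptic
# pattern (SP) label; all distributions and labels here are synthetic.
import numpy as np

def quantile_map(model, obs, model_new):
    """Map model_new through the empirical model->obs quantile relation."""
    return np.interp(model_new, np.sort(model), np.sort(obs))

rng = np.random.default_rng(2)
sp = rng.integers(0, 3, 5000)                    # synoptic pattern label per day
obs = rng.gamma(2.0 + sp, 2.0)                   # rainfall whose shape depends on SP
mod = rng.gamma(1.5 + sp, 2.5)                   # biased model rainfall

plain = quantile_map(mod, obs, mod)              # one correction for everything
by_sp = np.empty_like(mod)
for s in range(3):                               # SP-conditioned correction
    m = sp == s
    by_sp[m] = quantile_map(mod[m], obs[m], mod[m])
print(np.quantile(plain, 0.99), np.quantile(by_sp, 0.99))  # compare extremes
```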
Associations between host characteristics and antimicrobial resistance of Salmonella typhimurium.
Ruddat, I; Tietze, E; Ziehm, D; Kreienbrock, L
2014-10-01
A collection of Salmonella Typhimurium isolates obtained from sporadic salmonellosis cases in humans from Lower Saxony, Germany between June 2008 and May 2010 was used to perform an exploratory risk-factor analysis on antimicrobial resistance (AMR) using comprehensive host information on sociodemographic attributes, medical history, food habits and animal contact. Multivariate resistance profiles of minimum inhibitory concentrations for 13 antimicrobial agents were analysed using a non-parametric approach with multifactorial models adjusted for phage types. Statistically significant associations were observed for consumption of antimicrobial agents, region type and three factors on egg-purchasing behaviour, indicating that besides antimicrobial use the proximity to other community members, health consciousness and other lifestyle-related attributes may play a role in the dissemination of resistances. Furthermore, a statistically significant increase in AMR from the first study year to the second year was observed.
Assessment of NDE Reliability Data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.
1976-01-01
Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
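The binomial calculation can be sketched as a one-sided Clopper-Pearson lower bound on the probability of detection; the report's data-pooling tests are not reproduced here.

```python
# One-sided lower confidence bound on probability of detection (POD)
# from the binomial model, via the Clopper-Pearson (beta) form.
from scipy.stats import beta

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower bound on detection probability."""
    if detections == 0:
        return 0.0
    return beta.ppf(1 - confidence, detections, trials - detections + 1)

# 29 detections in 29 trials gives ~0.90 POD at 95% confidence,
# the familiar "29 of 29" demonstration criterion.
print(pod_lower_bound(29, 29))
```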
A comprehensive model for predicting burnout in Korean nurses.
Lee, Haejung; Song, Rhayun; Cho, Young Suk; Lee, Gil Za; Daly, Barbara
2003-12-01
Although burnout among nurses has been studied a great deal, this work has not included Korean nurses. Furthermore, the role of personal resources such as empathy and empowerment in predicting the variance in burnout has never been examined. The purpose of this study was to understand the phenomenon of burnout among Korean nurses. A comprehensive model of burnout was examined to identify significant predictors among individual characteristics, job stress and personal resources, with the intention of providing a basis for individual and organizational interventions to reduce levels of burnout experienced by Korean nurses. A cross-sectional correlational design was used. A sample of 178 nurses from general hospitals in southern Korea was surveyed from May 1999 to March 2000. The data were collected using paper and pencil self-rating questionnaires and analysed using descriptive statistics, Pearson correlations, and hierarchical multiple regression. Korean nurses reported higher levels of burnout than nurses in western countries such as Germany, Canada, the United Kingdom and the United States of America. Nurses who experienced higher job stress, showed lower cognitive empathy and empowerment, and worked in night shifts at tertiary hospitals were more likely to experience burnout. Identifying a comprehensive model of burnout among Korean nurses is an essential step to develop effective managerial strategies to reduce the problem. Suggestions to reduce the level of burnout include enhancing nurses' cognitive empathy and perceived power, providing clear job descriptions and work expectations, and exploring nurses' shift preferences, especially at tertiary hospitals. In future research we recommend recruiting nurses from broader geographical areas using random selection in order to increase the generalizability of the findings.
Seeking parsimony in hydrology and water resources technology
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2009-04-01
The principle of parsimony, also known as the principle of simplicity, the principle of economy and Ockham's razor, advises scientists to prefer the simplest theory among those that fit the data equally well. As such it is an epistemic principle, but it reflects an ontological characterization of the universe as ultimately parsimonious. Is this principle useful, and can it really be reconciled with, and implemented in, our modelling approaches to complex hydrological systems, whose elements and events are extraordinarily numerous, diverse and unique? The answer underlying the mainstream hydrological research of the last two decades seems to be negative. Hopes were invested in the power of computers, which would enable faithful and detailed representation of the diverse system elements and hydrological processes, based merely on "first principles" and resulting in "physically-based" models that tend to approach the complexity of real-world systems. Today the balance of that research endeavour seems negative, as it improved neither model predictive capacity nor comprehension of processes. A return to parsimonious modelling appears once again to be the promising route. The experience from recent research and from comparisons of parsimonious and complicated models indicates that the former can facilitate insight and comprehension, improve accuracy and predictive capacity, and increase efficiency. In addition, and despite the aspiration that "physically based" models would have lower data requirements or even ultimately become "data-free", parsimonious models require fewer data to achieve the same accuracy as more complicated models. Naturally, the concepts that reconcile the simplicity of parsimonious models with the complexity of hydrological systems are probability theory and statistics. Probability theory provides the theoretical basis for moving from a microscopic to a macroscopic view of phenomena, by mapping sets of diverse elements and events of hydrological systems to single numbers (a probability or an expected value), and statistics provides the empirical basis for summarizing data, making inference from them, and supporting decision making in water resource management. Unfortunately, the current state of the art in probability, statistics and their union, often called stochastics, is not fully satisfactory for the needs of modelling hydrological and water resource systems. A first problem is that stochastic modelling has traditionally relied on classical statistics, which is based on the independent "coin-tossing" prototype rather than on the study of real-world systems, whose behaviour is very different from that prototype. A second problem is that stochastic models (particularly multivariate ones) are often not parsimonious themselves. Therefore, substantial advancement of stochastics is necessary within a new paradigm of parsimonious hydrological modelling.
These ideas are illustrated using several examples, namely: (a) hydrological modelling of a karst system in Bosnia and Herzegovina using three different approaches ranging from parsimonious to detailed "physically-based"; (b) parsimonious modelling of a peculiar modified catchment in Greece; (c) a stochastic approach that can replace parameter-excessive ARMA-type models with a generalized algorithm that produces any shape of autocorrelation function (consistent with the accuracy provided by the data) using a couple of parameters; (d) a multivariate stochastic approach which replaces a huge number of parameters estimated from data with coefficients estimated by the principle of maximum entropy; and (e) a parsimonious approach for decision making in multi-reservoir systems using a handful of parameters instead of thousands of decision variables.
GSuite HyperBrowser: integrative analysis of dataset collections across the genome and epigenome.
Simovski, Boris; Vodák, Daniel; Gundersen, Sveinung; Domanska, Diana; Azab, Abdulrahman; Holden, Lars; Holden, Marit; Grytten, Ivar; Rand, Knut; Drabløs, Finn; Johansen, Morten; Mora, Antonio; Lund-Andersen, Christin; Fromm, Bastian; Eskeland, Ragnhild; Gabrielsen, Odd Stokke; Ferkingstad, Egil; Nakken, Sigve; Bengtsen, Mads; Nederbragt, Alexander Johan; Thorarensen, Hildur Sif; Akse, Johannes Andreas; Glad, Ingrid; Hovig, Eivind; Sandve, Geir Kjetil
2017-07-01
Recent large-scale undertakings such as ENCODE and Roadmap Epigenomics have generated experimental data mapped to the human reference genome (as genomic tracks) representing a variety of functional elements across a large number of cell types. Despite the high potential value of these publicly available data for a broad variety of investigations, little attention has been given to the analytical methodology necessary for their widespread utilisation. We here present a first principled treatment of the analysis of collections of genomic tracks. We have developed novel computational and statistical methodology to permit comparative and confirmatory analyses across multiple and disparate data sources. We delineate a set of generic questions that are useful across a broad range of investigations and discuss the implications of choosing different statistical measures and null models. Examples include contrasting analyses across different tissues or diseases. The methodology has been implemented in a comprehensive open-source software system, the GSuite HyperBrowser. To make the functionality accessible to biologists, and to facilitate reproducible analysis, we have also developed a web-based interface providing an expertly guided and customizable way of utilizing the methodology. With this system, many novel biological questions can flexibly be posed and rapidly answered. Through a combination of streamlined data acquisition, interoperable representation of dataset collections, and customizable statistical analysis with guided setup and interpretation, the GSuite HyperBrowser represents a first comprehensive solution for integrative analysis of track collections across the genome and epigenome. The software is available at: https://hyperbrowser.uio.no. © The Author 2017. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.
2010-01-01
I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (σ(M_H) = 0.11 ± 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework coherently incorporates multiple sources of randomness and uncertainty (photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, and peculiar velocity dispersion and distances) for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.
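BayeSN itself is not reproduced here; as a hedged illustration of the Gibbs-sampling idea the abstract describes, the toy sampler below alternates between individual-object parameters and a population mean in a simple normal hierarchy. All numbers, priors and variable names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy "peak magnitudes" for 50 hypothetical supernovae
n, s, tau = 50, 0.15, 0.10            # measurement error and population spread (assumed known)
true_mu = -18.0
y = true_mu + rng.normal(0.0, np.hypot(s, tau), n)

mu, draws = 0.0, []
for it in range(2000):
    # 1) individual parameters | population mean (conjugate normal update)
    prec = 1 / s**2 + 1 / tau**2
    theta = rng.normal((y / s**2 + mu / tau**2) / prec, np.sqrt(1 / prec))
    # 2) population mean | individual parameters (flat prior on mu)
    mu = rng.normal(theta.mean(), tau / np.sqrt(n))
    draws.append(mu)

# Posterior mean and sd of the population mean, after burn-in
print(np.mean(draws[500:]), np.std(draws[500:]))
```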
Bringing modeling to the masses: A web based system to predict potential species distributions
Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul
2010-01-01
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
Brown, Ted; Mapleston, Jennifer; Nairn, Allison; Molloy, Andrew
2013-03-01
Most individuals who have had a stroke present with some degree of residual cognitive and/or perceptual impairment. Occupational therapists often utilize standardized cognitive and perceptual assessments with clients to establish a baseline of skill performance as well as to inform goal setting and intervention planning. Being able to predict the functional independence of individuals who have had a stroke based on cognitive and perceptual impairments would assist with appropriate discharge planning and follow-up resource allocation. The study objective was to investigate the ability of the Developmental Test of Visual Perception - Adolescents and Adults (DTVP-A) and the Neurobehavioural Cognitive Status Exam (Cognistat) to predict functional performance, as measured by the Barthel Index, of individuals who have had a stroke. Data were collected using the DTVP-A, Cognistat and the Barthel Index from 32 adults recovering from stroke. Two standard multiple regression models were used to determine predictive variables of the functional independence dependent variable. Both the Cognistat and DTVP-A had a statistically significant ability to predict functional performance (as measured by the Barthel Index), accounting for 64.4% and 27.9% of the variance in each regression model, respectively. Two Cognistat subscales (Comprehension [beta = 0.48; p < 0.001] and Repetition [beta = 0.45; p < 0.004]) and one DTVP-A subscale (Copying [beta = 0.46; p < 0.014]) made statistically significant contributions to the regression models as independent variables. On the basis of the regression model findings, it appears that the DTVP-A's Copying and the Cognistat's Comprehension and Repetition subscales are useful in predicting the functional independence (as measured by the Barthel Index) of individuals who have had a stroke. Given the fundamental importance that cognition and perception have for one's ability to function independently, further investigation is warranted to determine other predictors of functional performance of individuals with a stroke. Copyright © 2012 John Wiley & Sons, Ltd.
Education Statistics Quarterly, Summer 2002.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2002-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…
Education Statistics Quarterly, Spring 2002.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2002-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…
NASA Astrophysics Data System (ADS)
York, Kathleen Christine
This mixed method study explored the relationship between metacomprehension strategy awareness and reading comprehension performance with narrative and science texts. Participants, 132 eighth-grade, predominantly African American students attending one middle school in a southeastern state, were administered a narrative and science version of the Metacomprehension Strategy Index (MSI) and asked to identify helpful strategic behaviors from six clustered subcategories (predicting and verifying; previewing; purpose setting; self-questioning; drawing from background knowledge; and summarizing and applying fix-up strategies). Participants also read and answered comprehension questions about narrative and science passages. Findings revealed no statistically significant differences in overall metacomprehension awareness with narrative and science texts. Statistically significant (p<.05) differences were found for two of the six subcategories, indicating that students preview and set purpose more often with science than narrative texts. Findings also indicated overall narrative and science metacomprehension awareness and comprehension performance scores were statistically significantly (p<.01) related. Specifically, the category of summarizing and applying fix-up strategies was the strongest predictor of comprehension performance for both narrative and science texts. The qualitative phase of this study explored the relationship between metacomprehension awareness with narrative and science texts and the comprehension performance of six middle school students, three of whom scored high overall on the narrative and science text comprehension assessments in phase one of the study, and three of whom scored low. A qualitative analysis of multiple sources of data, including video-taped interviews and think-alouds, revealed the three high scoring participants engaged in competent school-based, metacognitive conversations infused with goal, self, and narrative talk and demonstrated multi-strategic engagements with narrative and science texts. In stark contrast, the three low scoring participants engaged in dissonant school-based talk infused with disclaimers, over-generalized, decontextualized, and literalized answers and demonstrated robotic, limited (primarily rereading and restating), and frustrated strategic acts when interacting with both narrative and science texts. The educational implications are discussed. This dissertation was funded by the Office of Special Education Programs, Federal Office Grant Award No. 324E031501.
Statistical Teleodynamics: Toward a Theory of Emergence.
Venkatasubramanian, Venkat
2017-10-24
The central scientific challenge of the 21st century is developing a mathematical theory of emergence that can explain and predict phenomena such as consciousness and self-awareness. The most successful research program of the 20th century, reductionism, which goes from the whole to parts, seems unable to address this challenge. This is because addressing this challenge inherently requires an opposite approach, going from parts to the whole. In addition, reductionism, by the very nature of its inquiry, typically does not concern itself with teleology or purposeful behavior. Modeling emergence, in contrast, requires addressing teleology. Together, these two requirements present a formidable challenge in developing a successful mathematical theory of emergence. In this article, I describe a new theory of emergence, called statistical teleodynamics, that addresses certain aspects of the general problem. Statistical teleodynamics is a mathematical framework that unifies, within the same conceptual formalism, three seemingly disparate domains: purpose-free entities in statistical mechanics, human-engineered teleological systems in systems engineering, and nature-evolved teleological systems in biology and sociology. This theory rests on several key conceptual insights, the most important one being the recognition that entropy mathematically models the concept of fairness in economics and philosophy and, equivalently, the concept of robustness in systems engineering. These insights help prove that the fairest inequality of income is a log-normal distribution, which will emerge naturally at equilibrium in an ideal free market society. Similarly, the theory predicts the emergence of the three classes of network organization (exponential, scale-free, and Poisson) seen widely in a variety of domains. Statistical teleodynamics is the natural generalization of statistical thermodynamics, the most successful parts-to-whole systems theory to date, but this generalization is only a modest step toward a more comprehensive mathematical theory of emergence.
NASA Astrophysics Data System (ADS)
Hendikawati, P.; Dewi, N. R.
2017-04-01
Statistics is needed in the data analysis process and has comprehensive applications in daily life, so students must master statistical material well. The use of a Statistics textbook supported by ICT and a portfolio assessment approach was expected to help students improve their mathematical connection skills. The subjects of this research were 30 student teachers taking Statistics courses. The results of this research show that the use of a Statistics textbook supported by ICT and a portfolio assessment approach can improve students' mathematical connection skills.
Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan
2018-06-01
Quantitative models that represent different cognitive variables in terms of model parameters are an important tool in the advancement of cognitive science. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
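The authors' framework lives in their own software; purely as an illustration of how a Bayes factor can quantify evidence for a covariate-parameter relationship, here is a minimal Savage-Dickey density-ratio sketch under an assumed N(0, 1) prior on the slope and known unit noise (toy data, not the authors' method):

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)

# Toy data: an estimated model parameter and a covariate for 40 participants
n = 40
covariate = rng.normal(size=n)
param = 0.3 * covariate + rng.normal(size=n)      # true slope 0.3, noise sd 1

# Grid posterior for the slope under a N(0, 1) prior (noise sd assumed known)
grid = np.linspace(-2.0, 2.0, 4001)
prior = stats.norm.pdf(grid, 0.0, 1.0)
loglik = np.array([stats.norm.logpdf(param - b * covariate).sum() for b in grid])
post = prior * np.exp(loglik - loglik.max())
post /= trapezoid(post, grid)

# Savage-Dickey density ratio: BF01 = posterior / prior density at slope = 0
bf01 = np.interp(0.0, grid, post) / stats.norm.pdf(0.0)
print(f"BF01 = {bf01:.3f} (BF10 = {1/bf01:.1f} in favour of a nonzero slope)")
```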
Kralj, Damir; Kern, Josipa; Tonkovic, Stanko; Koncar, Miroslav
2015-09-09
Family medicine practices (FMPs) form the basis of the Croatian health care system. Use of electronic health record (EHR) software is mandatory and plays an important role in running these practices, but important functional features remain uneven and are largely left to the will of the software developers. The objective of this study was to develop a novel and comprehensive model for functional evaluation of EHR software in FMPs, based on current world standards, models and projects, as well as on actual user satisfaction and requirements. Based on previous theoretical and experimental research in this area, we constructed an initial framework model consisting of six basic categories as the basis for an online survey questionnaire. Family doctors assessed perceived software quality using a five-point Likert-type scale. Using exploratory factor analysis and appropriate statistical methods on the collected data, the final optimal structure of the novel model was formed. Special attention was focused on the validity and quality of the novel model. The online survey collected a total of 384 cases. The obtained results indicate both the quality of the assessed software and the quality in use of the novel model. The strong ergonomic orientation of the novel measurement model was particularly emphasised. The resulting novel model is validated in multiple ways, comprehensive and universal. It could be used to assess the user-perceived quality of almost all forms of ambulatory EHR software and is therefore useful to all stakeholders in this area of health care informatisation.
Impact of 2 Successive Smoking Bans on Hospital Admissions for Cardiovascular Diseases in Spain.
Galán, Iñaki; Simón, Lorena; Boldo, Elena; Ortiz, Cristina; Medrano, María José; Fernández-Cuenca, Rafael; Linares, Cristina; Pastor-Barriuso, Roberto
2018-04-16
To evaluate the impact of 2 smoking bans enacted in 2006 (partial ban) and 2011 (comprehensive ban) on hospitalizations for cardiovascular disease in the Spanish adult population. The study was performed in 14 provinces in Spain. Hospital admission records were collected for acute myocardial infarction (AMI), ischemic heart disease (IHD), and cerebrovascular disease (CVD) in patients aged ≥ 18 years from 2003 through 2012. We estimated immediate and 1-year effects with segmented-linear models. The coefficients for each province were combined using random-effects multivariate meta-analysis models. Overall, changes in admission rates immediately following the implementation of the partial ban and 1 year later were -1.8% and +1.2% for AMI, +0.1% and +0.4% for IHD, and +1.0% and +2.8% for CVD (P>.05). After the comprehensive ban, immediate changes were -2.3% for AMI, -2.6% for IHD, and -0.8% for CVD (P>.05), only to return to values from before the comprehensive ban 1 year later. For patients aged ≥ 65 years, immediate changes associated with the comprehensive ban were -5.0%, -3.9%, and -2.3% for AMI, IHD, and CVD, respectively (P<.05). Again, the 1-year changes were not statistically significant. In Spain, smoking bans failed to significantly reduce hospitalizations for AMI, IHD, or CVD among patients aged ≥ 18 years. In the population aged ≥ 65 years, hospital admissions due to these diseases showed significant decreases immediately after the implementation of the comprehensive ban, but these reductions disappeared at the 1-year evaluation. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
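The segmented-linear models used in this study can be illustrated schematically. The single synthetic admission series below (invented numbers; one series rather than 14 provinces combined by meta-analysis) shows how an immediate level change and a post-ban slope change are parameterized and estimated by least squares:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monthly admission rates over 120 months, with a ban at month 60 (schematic)
t = np.arange(120)
ban = (t >= 60).astype(float)
rate = 100 - 0.05 * t - 2.0 * ban + 0.03 * ban * (t - 60) + rng.normal(0, 1, 120)

# Segmented linear model: intercept, baseline trend, immediate level change,
# and change in slope after the ban
X = np.column_stack([np.ones_like(t), t, ban, ban * (t - 60)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(f"immediate change: {coef[2]:.2f}; post-ban slope change: {coef[3]:.3f}/month")
```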
Li, Jie; Huang, Yuan-Guang; Ran, Mao-Sheng; Fan, Yu; Chen, Wen; Evans-Lacko, Sara; Thornicroft, Graham
2018-04-01
Comprehensive interventions including components of stigma and discrimination reduction in schizophrenia in low- and middle-income countries (LMICs) are lacking. We developed a community-based comprehensive intervention to evaluate its effects on clinical symptoms, social functioning, internalized stigma and discrimination among patients with schizophrenia. A randomized controlled trial including an intervention group (n = 169) and a control group (n = 158) was performed. The intervention group received the comprehensive intervention (strategies against stigma and discrimination, psycho-education, social skills training and cognitive behavioral therapy) and the control group received face-to-face interviews. Both lasted for nine months. Participants were measured at baseline, 6 months and 9 months using the Internalized Stigma of Mental Illness scale (ISMI), Discrimination and Stigma Scale (DISC-12), Global Assessment of Functioning (GAF), Schizophrenia Quality of Life Scale (SQLS), Self-Esteem Scale (SES), Brief Psychiatric Rating Scale (BPRS) and PANSS negative scale (PANSS-N). Insight and medication compliance were evaluated by senior psychiatrists. Data were analyzed by descriptive statistics, t-test, chi-square test or Fisher's exact test. Linear Mixed Models were used to show intervention effectiveness on scales. General Linear Mixed Models with multinomial logistic link function were used to assess the effectiveness on medication compliance and insight. We found a significant reduction in anticipated discrimination and in BPRS and PANSS-N total scores, and an increase in overcoming stigma and in GAF scores, in the intervention group after 9 months. These findings suggest that the intervention may be effective in reducing anticipated discrimination, increasing skills for overcoming stigma, and improving clinical symptoms and social functioning in Chinese patients with schizophrenia. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Shintani, Natsuko; Li, Shaofeng; Ellis, Rod
2013-01-01
This article reports a meta-analysis of studies that investigated the relative effectiveness of comprehension-based instruction (CBI) and production-based instruction (PBI). The meta-analysis only included studies that featured a direct comparison of CBI and PBI in order to ensure methodological and statistical robustness. A total of 35 research…
ERIC Educational Resources Information Center
Abbott, Robert D.; Fayol, Michel; Zorman, Michel; Casalis, Séverine; Nagy, William; Berninger, Virginia W.
2016-01-01
Two longitudinal studies of word reading, spelling, and reading comprehension identified commonalities and differences in morphophonemic orthographies--French (Study 1, n = 1,313) or English (Study 2, n = 114) in early childhood (Grade 2) and middle childhood (Grade 5). For French and English, statistically significant concurrent relationships…
Developing a Test for Assessing Elementary Students' Comprehension of Science Texts
ERIC Educational Resources Information Center
Wang, Jing-Ru; Chen, Shin-Feng; Tsay, Reuy-Fen; Chou, Ching-Ting; Lin, Sheau-Wen; Kao, Huey-Lien
2012-01-01
This study reports on the process of developing a test to assess students' reading comprehension of scientific materials and on the statistical results of the verification study. A combination of classic test theory and item response theory approaches was used to analyze the assessment data from a verification study. Data analysis indicates the…
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Large-scale gene function analysis with the PANTHER classification system.
Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D
2013-08-01
The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
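PANTHER's statistical tests include overrepresentation analysis of user-supplied gene lists. As a hedged sketch of the underlying calculation only (not PANTHER's actual code; the example counts are invented), a hypergeometric tail test looks like this:

```python
from scipy.stats import hypergeom

def overrepresentation_p(hits_in_list, list_size, hits_in_genome, genome_size):
    """P-value for seeing at least `hits_in_list` genes from a functional
    category in a gene list, under random sampling from the genome
    (hypergeometric upper tail)."""
    return hypergeom.sf(hits_in_list - 1, genome_size, hits_in_genome, list_size)

# Example: 30 of 200 submitted genes fall in a category covering 500 of 20,000 genes
print(overrepresentation_p(30, 200, 500, 20000))
```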
Nelli, Jennifer M; Nicholson, Keith; Lakha, S Fatima; Louffat, Ada F; Chapparo, Luis; Furlan, Julio; Mailis-Gagnon, Angela
2012-01-01
BACKGROUND: With increasing knowledge of chronic pain, clinicians have attempted to assess chronic pain patients with lengthy assessment tools. OBJECTIVES: To describe the functional and emotional status of patients presenting to a tertiary care pain clinic; to assess the reliability and validity of a diagnostic classification system for chronic pain patients modelled after the Multidimensional Pain Inventory; to provide psychometric data on a modified Comprehensive Pain Evaluation Questionnaire (CPEQ); and to evaluate the relationship between the modified CPEQ construct scores and clusters with Diagnostic and Statistical Manual, Fourth Edition – Text Revision Pain Disorder diagnoses. METHODS: Data on 300 new patients over the course of nine months were collected using standardized assessment procedures plus a modified CPEQ at the Comprehensive Pain Program, Toronto Western Hospital, Toronto, Ontario. RESULTS: Cluster analysis of the modified CPEQ revealed three patient profiles, labelled Adaptive Copers, Dysfunctional, and Interpersonally Distressed, which closely resembled those previously reported. The distribution of modified CPEQ construct T scores across profile subtypes was similar to that previously reported for the original CPEQ. A novel finding was that of a strong relationship between the modified CPEQ clusters and constructs with Diagnostic and Statistical Manual, Fourth Edition – Text Revision Pain Disorder diagnoses. DISCUSSION AND CONCLUSIONS: The CPEQ, either the original or modified version, yields reproducible results consistent with the results of other studies. This technique may usefully classify chronic pain patients, but more work is needed to determine the meaning of the CPEQ clusters, what psychological or biomedical variables are associated with CPEQ constructs or clusters, and whether this instrument may assist in treatment planning or predict response to treatment. PMID:22518368
Texas Academic Library Statistics, 1986.
ERIC Educational Resources Information Center
Texas State Library, Austin. Dept. of Library Development.
This publication is the latest in a series of annual publications which are intended to provide a comprehensive source of statistics on academic libraries in Texas. The report is divided into four sections containing data on four-year public institutions, four-year private institutions, two-year colleges (both public and private), and law schools…
Education Statistics Quarterly, Fall 2002.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2003-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
Statistical Tables on Manpower.
ERIC Educational Resources Information Center
Manpower Administration (DOL), Washington, DC.
The President sends to the Congress each year a report on the Nation's manpower, as required by the Manpower Development and Training Act of 1962, which includes a comprehensive report by the Department of Labor on manpower requirements, resources, utilization, and training. This statistical appendix to the Department of Labor report presents data…
Education Statistics Quarterly, Fall 2001.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2001-01-01
The publication gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message from…
Higher Education in the U.S.S.R.: Curriculums, Schools, and Statistics.
ERIC Educational Resources Information Center
Rosen, Seymour M.
This study is designed to provide more comprehensive information on Soviet higher learning emphasizing its increasingly close alignment with Soviet national planning and economy. Following introductory material, Soviet curriculums in higher education and schools and statistics are reviewed. Highlights include: (1) A major development in Soviet…
Education Statistics Quarterly. Volume 5, Issue 1.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2003-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data product, and funding opportunities developed over a 3-month period. Each issue also contains a message…
Education Statistics Quarterly, Winter 2001.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2002-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell
2017-01-01
In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as to analyse its current biogeographical patterns and predict its future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus and climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of the related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human-practice-related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods for designing both local- and regional-level integrated pest management strategies for palm trees and other affected cultivated crops.
Statistical Hierarchy of Varying Speed of Light Cosmologies
NASA Astrophysics Data System (ADS)
Salzano, Vincenzo; Dąbrowski, Mariusz P.
2017-12-01
Many varying speed of light (VSL) theories have been developed recently. Here we address the issue of their observational verification in a fully comprehensive way. By using the most updated cosmological probes, we test three different candidates for a VSL theory (Barrow & Magueijo, Avelino & Martins, and Moffat). We consider many different Ansätze for both the functional form of c(z) and the dark energy dynamics. We compare these results using a reliable statistical tool, the Bayesian evidence. We find that the present cosmological data are perfectly compatible with any of these VSL scenarios, but for the Moffat model there is a higher Bayesian evidence ratio in favor of VSL over the c = constant ΛCDM scenario. Moreover, in such a scenario, the VSL signal can help to strengthen constraints on the spatial curvature (with an indication toward an open universe), to clarify some properties of dark energy (exclusion of a cosmological constant at the 2σ level), and is also falsifiable in the near future owing to peculiar issues that differentiate this model from the standard one. Finally, we apply an information prior and an entropy prior in order to put physical constraints on the models, though still in favor of Moffat's proposal.
Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Finally, simulations applying these results to image restoration are provided.
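The bound relations studied in this paper are easy to probe numerically. The sketch below checks, for a random image and a flat 3x3 window, the trivial erosion/dilation bounds (which always hold for order-statistics filters) and the tighter opening/closing bounds (which the paper shows hold under derived conditions). This is an empirical illustration, not a substitute for the paper's proofs:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
size = (3, 3)  # flat window, used both as structuring element and filter window

eroded = ndimage.grey_erosion(img, size=size)    # pointwise window minimum
dilated = ndimage.grey_dilation(img, size=size)  # pointwise window maximum
opened = ndimage.grey_opening(img, size=size)
closed = ndimage.grey_closing(img, size=size)
median = ndimage.median_filter(img, size=size)   # an order-statistics filter

# Trivial bounds: erosion <= any order statistic <= dilation (always true)
print(np.all(eroded <= median), np.all(median <= dilated))
# Tighter open/close bounds: hold under the paper's conditions; check empirically
print(np.all(opened <= median), np.all(median <= closed))
```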
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: (i) free Brownian motion of the tracer; (ii) hop diffusion of the tracer in a periodic meshwork of squares; and (iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
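For the free-Brownian-motion case (i), the simulate-and-test idea can be sketched as follows: simulate the diffusion process on a grid of candidate parameters and score each setting against the data with a nonparametric two-sample test. The data, grid and diffusion constant below are invented, and the Kolmogorov-Smirnov test stands in for whichever nonparametric test the authors used:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

def step_lengths_sq(D, dt=0.01, n=500):
    """Squared single-step displacements of free 2D Brownian motion
    with diffusion constant D (um^2/s) and frame time dt (s)."""
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n, 2))
    return (steps ** 2).sum(axis=1)

observed = step_lengths_sq(0.5)   # stand-in "experimental" data, true D = 0.5

# One p-value per setting on a discrete grid of candidate D values
for D in np.linspace(0.1, 1.0, 10):
    p = ks_2samp(observed, step_lengths_sq(D)).pvalue
    print(f"D = {D:.1f} um^2/s: p = {p:.3f}")   # high p = consistent with the data
```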
Loley, Christina; Alver, Maris; Assimes, Themistocles L; Bjonnes, Andrew; Goel, Anuj; Gustafsson, Stefan; Hernesniemi, Jussi; Hopewell, Jemma C; Kanoni, Stavroula; Kleber, Marcus E; Lau, King Wai; Lu, Yingchang; Lyytikäinen, Leo-Pekka; Nelson, Christopher P; Nikpay, Majid; Qu, Liming; Salfati, Elias; Scholz, Markus; Tukiainen, Taru; Willenborg, Christina; Won, Hong-Hee; Zeng, Lingyao; Zhang, Weihua; Anand, Sonia S; Beutner, Frank; Bottinger, Erwin P; Clarke, Robert; Dedoussis, George; Do, Ron; Esko, Tõnu; Eskola, Markku; Farrall, Martin; Gauguier, Dominique; Giedraitis, Vilmantas; Granger, Christopher B; Hall, Alistair S; Hamsten, Anders; Hazen, Stanley L; Huang, Jie; Kähönen, Mika; Kyriakou, Theodosios; Laaksonen, Reijo; Lind, Lars; Lindgren, Cecilia; Magnusson, Patrik K E; Marouli, Eirini; Mihailov, Evelin; Morris, Andrew P; Nikus, Kjell; Pedersen, Nancy; Rallidis, Loukianos; Salomaa, Veikko; Shah, Svati H; Stewart, Alexandre F R; Thompson, John R; Zalloua, Pierre A; Chambers, John C; Collins, Rory; Ingelsson, Erik; Iribarren, Carlos; Karhunen, Pekka J; Kooner, Jaspal S; Lehtimäki, Terho; Loos, Ruth J F; März, Winfried; McPherson, Ruth; Metspalu, Andres; Reilly, Muredach P; Ripatti, Samuli; Sanghera, Dharambir K; Thiery, Joachim; Watkins, Hugh; Deloukas, Panos; Kathiresan, Sekar; Samani, Nilesh J; Schunkert, Heribert; Erdmann, Jeanette; König, Inke R
2016-10-12
In recent years, genome-wide association studies have identified 58 independent risk loci for coronary artery disease (CAD) on the autosome. However, due to the sex-specific data structure of the X chromosome, it has been excluded from most of these analyses. While females have 2 copies of chromosome X, males have only one. Also, one of the female X chromosomes may be inactivated. Therefore, special test statistics and quality control procedures are required. Thus, little is known about the role of X-chromosomal variants in CAD. To fill this gap, we conducted a comprehensive X-chromosome-wide meta-analysis including more than 43,000 CAD cases and 58,000 controls from 35 international study cohorts. For quality control, sex-specific filters were used to adequately take the special structure of X-chromosomal data into account. For single study analyses, several logistic regression models were calculated allowing for inactivation of one female X-chromosome, adjusting for sex and investigating interactions between sex and genetic variants. Then, meta-analyses including all 35 studies were conducted using random effects models. None of the investigated models revealed genome-wide significant associations for any variant. Although we analyzed the largest-to-date sample, currently available methods were not able to detect any associations of X-chromosomal variants with CAD.
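The random-effects meta-analysis step can be illustrated with the standard DerSimonian-Laird estimator. This is a generic sketch with invented per-study numbers, not the consortium's actual analysis pipeline:

```python
import numpy as np

def dersimonian_laird(beta, se):
    """Random-effects meta-analysis of per-study effect sizes `beta`
    (e.g. log-odds ratios) with standard errors `se`."""
    beta, se = np.asarray(beta), np.asarray(se)
    w = 1 / se**2                               # fixed-effect weights
    beta_fe = np.sum(w * beta) / np.sum(w)
    q = np.sum(w * (beta - beta_fe) ** 2)       # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(beta) - 1)) / c)  # between-study variance (DL estimator)
    w_re = 1 / (se**2 + tau2)                   # random-effects weights
    est = np.sum(w_re * beta) / np.sum(w_re)
    return est, np.sqrt(1 / np.sum(w_re)), tau2

# Toy example: log-odds ratios from 5 hypothetical cohorts
est, se_est, tau2 = dersimonian_laird([0.10, 0.05, -0.02, 0.12, 0.03],
                                      [0.04, 0.06, 0.05, 0.08, 0.05])
print(est, se_est, tau2)
```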
Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de
2016-01-01
Exner (1989) and Weiner (2003) identified 3 types of Rorschach codes that are most likely to contain personally relevant projective material: Distortions, Movement, and Embellishments. We examine how often these types of codes occur in normative data and whether their frequency changes for the 1st, 2nd, 3rd, 4th, or last response to a card. We also examine the impact on these variables of the Rorschach Performance Assessment System's (R-PAS) statistical modeling procedures that convert the distribution of responses (R) from Comprehensive System (CS) administered protocols to match the distribution of R found in protocols obtained using R-optimized administration guidelines. In 2 normative reference databases, the results indicated that about 40% of responses (M = 39.25) have 1 type of code, 15% have 2 types, and 1.5% have all 3 types, with frequencies not changing by response number. In addition, there were no mean differences between the original CS and R-optimized modeled records (M Cohen's d = -0.04 in both databases). When considered alongside findings showing minimal differences between the protocols of people randomly assigned to CS or R-optimized administration, the data suggest R-optimized administration should not alter the extent to which potential projective material is present in a Rorschach protocol.
Chang, S Q; Williams, R L; McLaughlin, T F
1983-01-01
The purpose of this study was to evaluate the effectiveness of oral reading as a teaching technique for improving reading comprehension of 11 Educable Mentally Handicapped or Severe Learning Disabled adolescents. Students were tested on their ability to answer comprehension questions from a short factual article. Comprehension improved following the oral reading for students with a reading grade equivalent of less than 5.5 (as measured by the Wide Range Achievement Test) but not for those students having a grade equivalent of greater than 5.5. This association was statistically significant (p < .01). Oral reading appeared to improve comprehension among the poorer readers but not for readers with moderately high ability.
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian
2016-04-01
Understanding the dynamic behavior of complex structures such as long-span bridges requires dense deployment of sensors. Traditional wired sensor systems are generally expensive and time-consuming to install due to cabling. With wireless communication and on-board computation capabilities, wireless smart sensor networks have the advantages of being low cost and easy to deploy and maintain, and therefore facilitate dense instrumentation for structural health monitoring. A long-term monitoring project was recently carried out for a cable-stayed bridge in South Korea with a dense array of 113 smart sensors, constituting the world's largest wireless smart sensor network for civil structural monitoring. This paper presents a comprehensive statistical analysis of the modal properties, including natural frequencies, damping ratios and mode shapes, of the monitored cable-stayed bridge. The data analyzed in this paper consist of structural vibration signals monitored over a 12-month period under ambient excitation. The correlation between environmental temperature and the modal frequencies is also investigated. The results show the long-term statistical structural behavior of the bridge, which serves as the basis for Bayesian statistical updating of the numerical model.
A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data
Vinaixa, Maria; Samino, Sara; Saez, Isabel; Duran, Jordi; Guinovart, Joan J.; Yanes, Oscar
2012-01-01
Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can be then unequivocally identified. This paper reports on a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples. PMID:24957762
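A minimal version of the univariate workflow discussed above, parallel Welch t-tests over all features followed by Benjamini-Hochberg control of the false discovery rate, might look like this (a synthetic feature matrix stands in for the paper's real LC/MS datasets):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)

# Toy LC/MS feature matrix: 500 features x 10 samples per group
n_feat = 500
control = rng.lognormal(0.0, 0.5, size=(n_feat, 10))
case = rng.lognormal(0.0, 0.5, size=(n_feat, 10))
case[:25] *= 2.0                                  # 25 genuinely altered features

# One Welch t-test per feature on log intensities, then Benjamini-Hochberg FDR
pvals = stats.ttest_ind(np.log(case), np.log(control), axis=1, equal_var=False).pvalue
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("features flagged at 5% FDR:", int(reject.sum()))
```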
Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.
2012-10-01
In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material, for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty: ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian Model Averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.
NASA Astrophysics Data System (ADS)
Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.
2018-01-01
In this paper, we assess our traditional elementary statistics education and introduce an elementary statistics course built on simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external, generally accepted measure of students' basic statistical literacy. We also introduce a new teaching method for the elementary statistics class: unlike the traditional course, we use a simulation-based inference method to conduct hypothesis testing. The literature has shown that this new teaching method works very well in increasing students' understanding of statistics.
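Simulation-based inference of the kind introduced in this course replaces distributional formulas with direct re-simulation of the null hypothesis. A minimal classroom example (the counts are invented) is:

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed: 34 correct answers out of 50 on a true/false quiz.
# Simulation-based test of H0: students are just guessing (p = 0.5).
observed = 34
sims = rng.binomial(n=50, p=0.5, size=100_000)   # re-run the experiment under H0
p_value = np.mean(sims >= observed)              # one-sided "as or more extreme"

print(f"simulated p-value: {p_value:.4f}")
```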
NASA Astrophysics Data System (ADS)
Bramwell, Steven T.; Gingras, Michel J. P.; Holdsworth, Peter C. W.
2013-03-01
Pauling's model of hydrogen disorder in water ice represents the prototype of a frustrated system. Over the years it has spawned several analogous models, including Anderson's model antiferromagnet and the statistical "vertex" models. Spin Ice is a sixteen vertex model of "ferromagnetic frustration" that is approximated by real materials, most notably the rare earth pyrochlores Ho2Ti2O7, Dy2Ti2O7 and Ho2Sn2O7. These "spin ice materials" have the Pauling zero point entropy and in all respects represent almost ideal realisations of Pauling's model. They provide experimentalists with unprecedented access to a wide variety of novel magnetic states and phase transitions that are located in different regions of the field-temperature phase diagram. They afford theoreticians the opportunity to explore many new features of the magnetic interactions and statistical mechanics of frustrated systems. This chapter is a comprehensive review of the physics -- both experimental and theoretical -- of spin ice. It starts with a discussion of the historic problem of water ice and its relation to spin ice and other frustrated magnets. The properties of spin ice are then discussed in three sections that deal with the zero field spin ice state, the numerous field-induced states (including the recently identified "kagomé ice") and the magnetic dynamics. Some materials related to spin ice are briefly described and the chapter is concluded with a short summary of spin ice physics.
NASA Astrophysics Data System (ADS)
Cenek, Martin; Dahl, Spencer K.
2016-11-01
Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
ERIC Educational Resources Information Center
Noser, Thomas C.; Tanner, John R.; Shah, Situl
2008-01-01
The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…
Do employers reward physical attractiveness in transition countries?
Mavisakalyan, Astghik
2018-02-01
This paper studies the labour market returns to physical attractiveness using data from three transition countries of the Caucasus: Armenia, Azerbaijan and Georgia. I estimate a large positive effect of attractive looks on males' probability of employment. Results from the most comprehensive model suggest a marginal effect of 11.1 percentage points. Using a partial identification approach, I show that this relationship is likely to be causal. After accounting for covariates, particularly measures of human capital, there is no evidence for a statistically significant link between females' attractiveness and employment. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Rolland, Jannick P.
2014-03-01
The prevalence of Dry Eye Disease (DED) in the USA is approximately 40 million aging adults, with an economic burden of about $3.8 billion. However, a comprehensive understanding of tear film dynamics, which is the prerequisite to advancing the management of DED, is yet to be realized. To extend our understanding of tear film dynamics, we investigate the simultaneous estimation of the lipid- and aqueous-layer thicknesses with the combination of optical coherence tomography (OCT) and statistical decision theory. Specifically, we develop a mathematical model for Fourier-domain OCT that takes into account the different statistical processes associated with the imaging chain. We formulate the first-order and second-order statistics of the output of the OCT system, from which simulated OCT spectra can be generated. The object being imaged is a tear film model comprising a lipid and an aqueous layer on top of a rough corneal surface. We then implement a maximum-likelihood (ML) estimator that interprets the simulated OCT data to estimate the thicknesses of both layers of the tear film. Results show that an axial resolution of 1 μm allows estimates down to the nanometer scale. We use the root mean square error of the estimates as a metric to evaluate system parameters, such as the tradeoff between imaging speed and estimation precision. This framework further provides the theoretical basis for optimizing the imaging setup for a specific thickness-estimation task.
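As a toy illustration of the ML step described above (under additive white Gaussian noise, ML estimation reduces to least squares), the sketch below fits a deliberately simplified two-interface interferogram model by grid search. The fringe model, wavenumber band, and layer thicknesses are assumptions for illustration and are far cruder than the paper's full model of the imaging chain.

```python
import numpy as np

rng = np.random.default_rng(1)
k = np.linspace(6.5, 9.5, 512)           # wavenumber band (1/um), assumed

def spectrum(d_lipid, d_aq, k):
    # Toy two-interface interferogram: one fringe term per optical depth
    return (1.0 + 0.6 * np.cos(2 * k * d_lipid)
                + 0.4 * np.cos(2 * k * (d_lipid + d_aq)))

true = (0.05, 3.0)                       # um: 50 nm lipid, 3 um aqueous
data = spectrum(*true, k) + rng.normal(0.0, 0.05, k.size)

# Under additive white Gaussian noise, ML = least squares; here solved by
# brute-force grid search over both thicknesses.
lip = np.linspace(0.01, 0.15, 141)
aq = np.linspace(2.0, 4.0, 201)
sse = np.array([[np.sum((data - spectrum(dl, da, k)) ** 2) for da in aq]
                for dl in lip])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
print(f"ML estimate: lipid = {lip[i]*1e3:.0f} nm, aqueous = {aq[j]:.2f} um")
```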
Research on Bidding Decision-making of International Public-Private Partnership Projects
NASA Astrophysics Data System (ADS)
Hu, Zhen Yu; Zhang, Shui Bo; Liu, Xin Yan
2018-06-01
In order to select the optimal quasi-bidding project for an investment enterprise, a bidding decision-making model for international PPP projects was established in this paper. Firstly, the literature frequency statistics method was adopted to screen out the bidding decision-making indexes, and accordingly the bidding decision-making index system for international PPP projects was constructed. Then, the group decision-making characteristic root method, the entropy weight method, and an optimization model based on the least squares method were used to set the decision-making index weights. The optimal quasi-bidding project was then determined by calculating the consistent effect measure of each decision-making index value and the comprehensive effect measure of each quasi-bidding project. Finally, the bidding decision-making model for international PPP projects was further illustrated by a hypothetical case. This model can effectively serve as a theoretical foundation and technical support for the bidding decision-making of international PPP projects.
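Of the weighting schemes named above, the entropy weight method is the most mechanical to reproduce. A minimal sketch, with an assumed illustrative decision matrix of benefit-type indexes (rows are quasi-bidding projects, columns are indexes):

```python
import numpy as np

# Illustrative decision matrix (not from the paper's case study)
X = np.array([[0.7, 0.5, 0.9],
              [0.6, 0.8, 0.4],
              [0.9, 0.6, 0.7]])

p = X / X.sum(axis=0)                          # normalize each index column
m = X.shape[0]
entropy = -(p * np.log(p)).sum(axis=0) / np.log(m)
weights = (1 - entropy) / (1 - entropy).sum()  # lower entropy -> more weight
score = X @ weights                            # simple weighted effect measure
print("index weights:", np.round(weights, 3))
print("project ranking (best first):", np.argsort(-score))
```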
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
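The core of the remedy is ordinary ridge shrinkage applied to the structural-model solve on latent-variable correlations. A minimal sketch with assumed, near-collinear correlations (illustrative numbers, not the simulation design above):

```python
import numpy as np

# R_xx: correlations among predictor latents; r_xy: correlations with the
# outcome latent. Near-collinear predictors make the plain solve unstable.
R_xx = np.array([[1.00, 0.95],
                 [0.95, 1.00]])
r_xy = np.array([0.60, 0.58])

def path_coefficients(R_xx, r_xy, ridge=0.0):
    # OLS-style solve on correlations; ridge > 0 shrinks toward stability
    k = R_xx.shape[0]
    return np.linalg.solve(R_xx + ridge * np.eye(k), r_xy)

print("unregularized:", np.round(path_coefficients(R_xx, r_xy), 3))
print("ridge (0.1):  ", np.round(path_coefficients(R_xx, r_xy, 0.1), 3))
```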
VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.
2015-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.
Structural kinetic modeling of metabolic networks.
Steuer, Ralf; Gross, Thilo; Selbig, Joachim; Blasius, Bernd
2006-08-08
To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advances in the topological analysis of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme-kinetic rate laws and their associated parameter values. Here we propose a method that aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space. The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.
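The ensemble idea above can be condensed to a few lines: write the Jacobian at a steady state in terms of normalized saturation parameters, sample those parameters uniformly, and tabulate stability. The toy two-metabolite system below (with an assumed positive-feedback term t3) is a minimal sketch, not the paper's glycolysis or Calvin-cycle models:

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample normalized saturation parameters in [0, 1] and record how often
# the steady state of the toy Jacobian is locally stable.
n, stable = 10_000, 0
for _ in range(n):
    t1, t2, t3 = rng.uniform(0.0, 1.0, size=3)
    J = np.array([[-t1,  t3],
                  [ t1, -t2]])   # Jacobian parameterized by saturations
    if np.max(np.linalg.eigvals(J).real) < 0:
        stable += 1
print(f"locally stable fraction of the sampled ensemble: {stable / n:.2f}")
```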
Comprehensive stroke units: a review of comparative evidence and experience.
Chan, Daniel K Y; Cordato, Dennis; O'Rourke, Fintan; Chan, Daniel L; Pollack, Michael; Middleton, Sandy; Levi, Chris
2013-06-01
Stroke unit care offers significant benefits in survival and dependency when compared to care in a general medical ward. Most stroke units are either acute or rehabilitation units, but the comprehensive (combined acute and rehabilitation) model (comprehensive stroke unit) is less common. The aims were to examine different levels of evidence for the comprehensive stroke unit compared to other organized inpatient stroke care, and to share local experience of comprehensive stroke units. The Cochrane Library and Medline (1980 to December 2010) were reviewed for English-language articles comparing stroke units to alternative forms of stroke care delivery, different types of stroke unit models, and differences in processes of care within different stroke unit models. Different levels of comparative evidence for comprehensive stroke units versus other models of stroke units were collected. There are no randomized controlled trials directly comparing comprehensive stroke units to other stroke unit models (either acute or rehabilitation). Comprehensive stroke units are associated with reduced length of stay and the greatest reduction in combined death and dependency in a meta-analysis when compared to other stroke unit models. Comprehensive stroke units also show better length of stay and functional outcome when compared to acute or rehabilitation stroke unit models in a cross-sectional study, and better length of stay in a 'before-and-after' comparative study. Components of stroke unit care that improve outcome are multifactorial and most probably include early mobilization. A comprehensive stroke unit model has been successfully implemented in metropolitan and rural hospital settings. Comprehensive stroke units are associated with reductions in length of stay and combined death and dependency and with improved functional outcomes compared to other stroke unit models. A comprehensive stroke unit model is worth considering as the preferred model of stroke unit care in the planning and delivery of metropolitan and rural stroke services. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
The Effects of Conditioned Reinforcement for Reading on Reading Comprehension for 5th Graders
ERIC Educational Resources Information Center
Cumiskey Moore, Colleen
2017-01-01
In three experiments, I tested the effects of the conditioned reinforcement for reading (R+Reading) on reading comprehension with 5th graders. In Experiment 1, I conducted a series of statistical analyses with data from 18 participants for one year. I administered 4 pre/post measurements for reading repertoires which included: 1) state-wide…
ERIC Educational Resources Information Center
Baggerly, Jennifer; Ferretti, Larissa K.
2008-01-01
What is the impact of natural disasters on students' statewide assessment scores? To answer this question, Florida Comprehensive Assessment Test (FCAT) scores of 55,881 students in grades 4 through 10 were analyzed to determine if there were significant decreases after the 2004 hurricanes. Results reveal that there was statistical but no practical…
ERIC Educational Resources Information Center
Tighe, Elizabeth L.; Schatschneider, Christopher
2016-01-01
The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological…
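Quantile regression fits conditional quantiles rather than the conditional mean, which is what lets predictor contributions vary across comprehension levels as described above. A minimal sketch with synthetic data and stand-in variable names (not the study's measures), using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
# Synthetic stand-ins for morphological awareness, vocabulary, comprehension
n = 300
df = pd.DataFrame({"morph": rng.normal(size=n), "vocab": rng.normal(size=n)})
df["comprehension"] = 0.3 * df.morph + 0.5 * df.vocab + rng.normal(0, 1, n)

# Fit the same linear predictor at several conditional quantiles
for q in (0.1, 0.5, 0.9):
    res = smf.quantreg("comprehension ~ morph + vocab", df).fit(q=q)
    print(q, res.params[["morph", "vocab"]].round(2).to_dict())
```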
ERIC Educational Resources Information Center
Li, Chen-Hong; Chen, Cai-Jun; Wu, Meng-Jie; Kuo, Ya-Chu; Tseng, Yun-Ting; Tsai, Shi-Yi; Shih, Hung-Chun
2017-01-01
We examined the effect of cultural familiarity and question-preview types on the listening comprehension of L2 learners. The results showed that the participants who received the full question-preview format scored higher than those receiving either the answer-option preview or question-stem preview, despite a statistically nonsignificant…
Nodal portraits of quantum billiards: Domains, lines, and statistics
NASA Astrophysics Data System (ADS)
Jain, Sudhir Ranjan; Samajdar, Rhine
2017-10-01
This is a comprehensive review of the nodal domains and lines of quantum billiards, emphasizing a quantitative comparison of theoretical findings to experiments. The nodal statistics are shown to distinguish not only between regular and chaotic classical dynamics but also between different geometric shapes of the billiard system itself. How a random superposition of plane waves can model chaotic eigenfunctions is discussed and the connections of the complex morphology of the nodal lines thereof to percolation theory and Schramm-Loewner evolution are highlighted. Various approaches to counting the nodal domains—using trace formulas, graph theory, and difference equations—are also illustrated with examples. The nodal patterns addressed pertain to waves on vibrating plates and membranes, acoustic and electromagnetic modes, wave functions of a "particle in a box" as well as to percolating clusters, and domains in ferromagnets, thus underlining the diversity and far-reaching implications of the problem.
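The random-wave model mentioned above is easy to simulate: superpose many plane waves of fixed wavenumber with random directions and phases, then count sign-connected regions. A minimal sketch (grid size and wavenumber are arbitrary illustrative choices):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
# Random superposition of plane waves with fixed |k| as a statistical model
# of a chaotic billiard eigenfunction
k, n_waves, N = 20.0, 100, 400
x = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, x)
psi = np.zeros_like(X)
for _ in range(n_waves):
    th, ph = rng.uniform(0, 2 * np.pi, 2)
    psi += np.cos(k * (X * np.cos(th) + Y * np.sin(th)) + ph)

# Nodal domains = connected regions of constant sign; label the positive
# and negative sets separately and add the counts
n_pos = ndimage.label(psi > 0)[1]
n_neg = ndimage.label(psi < 0)[1]
print("nodal domain count:", n_pos + n_neg)
```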
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-01-01
The results of a comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter), which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous -- i.e., quantitative -- interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
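A minimal sketch of a "statistical fatigue curve" of this kind: a Coffin-Manson mean strain-cycle relation with lognormal scatter in life, indexed by failure probability. All coefficients are illustrative assumptions, not the report's fitted values:

```python
import numpy as np
from scipy import stats

eps_f, c, sigma_logN = 0.3, -0.5, 0.4   # assumed Coffin-Manson fit and scatter

def cycles_to_failure(strain, p):
    n_mean = (strain / eps_f) ** (1.0 / c)   # mean life at this strain range
    z = stats.norm.ppf(p)                    # failure-probability quantile
    return n_mean * np.exp(sigma_logN * z)

strain = 0.002
for p in (0.01, 0.50, 0.99):
    print(f"P(fail)={p:.0%}: N = {cycles_to_failure(strain, p):.3g} cycles")

# Expected cumulative failures among 10,000 interconnects over an assumed
# design life of 7300 thermal cycles (e.g., 20 years of daily cycling)
n_design = 7300
p_fail = stats.norm.cdf(
    (np.log(n_design) - np.log((strain / eps_f) ** (1.0 / c))) / sigma_logN)
print(f"expected failures: {10_000 * p_fail:.0f}")
```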
The beta Burr type X distribution properties with application.
Merovci, Faton; Khaleel, Mundher Abdullah; Ibrahim, Noor Akma; Shitan, Mahendran
2016-01-01
We develop a new continuous distribution called the beta-Burr type X distribution that extends the Burr type X distribution. We provide a comprehensive mathematical treatment of this distribution. Furthermore, various structural properties of the new distribution are derived, including the moment generating function and the rth moment, thus generalizing some results in the literature. We also obtain expressions for the density, moment generating function and rth moment of the order statistics. We use maximum likelihood estimation to estimate the parameters. Additionally, the asymptotic confidence intervals for the parameters are derived from the Fisher information matrix. Finally, a simulation study is carried out under varying sample sizes to assess the performance of this model. An illustration with a real dataset indicates that the new distribution can serve as a good alternative for modeling positive real data in many areas.
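The construction is the standard beta-G composition: the beta cdf evaluated at the Burr X cdf G(x) = (1 - e^{-(λx)²})^θ. A sketch of the density and distribution functions (the parameter names a, b, θ, λ are assumed notation):

```python
import numpy as np
from scipy import stats
from scipy.special import beta as beta_fn

def burrx_cdf(x, theta, lam):
    return (1.0 - np.exp(-(lam * x) ** 2)) ** theta

def burrx_pdf(x, theta, lam):
    z = np.exp(-(lam * x) ** 2)
    return 2 * theta * lam**2 * x * z * (1.0 - z) ** (theta - 1)

def bbx_cdf(x, a, b, theta, lam):
    # Beta-G composition: F(x) = I_{G(x)}(a, b)
    return stats.beta.cdf(burrx_cdf(x, theta, lam), a, b)

def bbx_pdf(x, a, b, theta, lam):
    G = burrx_cdf(x, theta, lam)
    return (burrx_pdf(x, theta, lam)
            * G ** (a - 1) * (1 - G) ** (b - 1) / beta_fn(a, b))

x = np.linspace(0.01, 3, 5)
print(bbx_cdf(x, a=2.0, b=1.5, theta=1.2, lam=1.0))
```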
NASA Astrophysics Data System (ADS)
Hassanzadeh, Y.; Vidon, P.; Gold, A.; Pradhanang, S. M.; Addy, K.
2017-12-01
Vegetated riparian zones are often considered for use as best management practices to mitigate the impacts of agriculture on water quality. However, riparian zones can also be a source of greenhouse gases, and their influence on water quality varies depending on landscape hydrogeomorphic characteristics and climate. Methods used to evaluate riparian zone functions include conceptual models and spatially explicit, process-based models (e.g., REMM), but very few attempts have been made to connect riparian zone characteristics with function using easily accessible landscape-scale data. Here, we present comprehensive statistical models that can be used to assess riparian zone functions with easily obtainable landscape-scale hydrogeomorphic attributes and climate data. Models were developed from a database spanning 88 years and 36 sites. Statistical methods including principal component analysis and stepwise regression were used to reduce data dimensionality and identify significant predictors. Models were validated using additional data collected from the scientific literature. The 8 models developed connect landscape characteristics to nitrogen and phosphorus concentration and removal (models 1-4), greenhouse gas emissions (models 5-7), and water table depth (model 8). Results show the range of influence that various climate and landscape characteristics have on riparian zone functions, and the tradeoffs that exist with regard to nitrogen, phosphorus, and greenhouse gases. These models will help reduce the need for extensive field measurements and help scientists and land managers make more informed decisions regarding the use of riparian zones for water quality management.
NASA Technical Reports Server (NTRS)
Young, Steve; UijtdeHaag, Maarten; Sayre, Jonathon
2003-01-01
Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data representing terrain, obstacles, and cultural features. As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. Further, updates to the databases may not be provided as changes occur. These issues limit the certification level and constrain the operational context of SVS for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound the integrity of Digital Elevation Models (DEMs) by using radar altimeter measurements during flight. This paper describes an extension of this concept to include X-band Weather Radar (WxR) measurements. This enables the monitor to detect additional classes of DEM errors and to reduce the exposure time associated with integrity threats. Feature extraction techniques are used along with a statistical assessment of similarity measures between the sensed and stored features that are detected. Recent flight testing in the area around the Juneau, Alaska Airport (JNU) has resulted in a comprehensive set of sensor data that is being used to assess the feasibility of the proposed monitor technology. Initial results of this assessment are presented.
Psychosocial work factors and long sickness absence in Europe.
Slany, Corinna; Schütte, Stefanie; Chastang, Jean-François; Parent-Thirion, Agnès; Vermeylen, Greet; Niedhammer, Isabelle
2014-01-01
Studies exploring a wide range of psychosocial work factors separately and together in association with long sickness absence are still lacking. The objective of this study was to explore the associations between psychosocial work factors measured following a comprehensive instrument (Copenhagen psychosocial questionnaire, COPSOQ) and long sickness absence (> 7 days/year) in European employees of 34 countries. An additional objective was to study the differences in these associations according to gender and countries. The study population consisted of 16 120 male and 16 588 female employees from the 2010 European working conditions survey. Twenty-five psychosocial work factors were explored. Statistical analysis was performed using multilevel logistic regression models and interaction testing. When studied together in the same model, factors related to job demands (quantitative demands and demands for hiding emotions), possibilities for development, social relationships (role conflicts, quality of leadership, social support, and sense of community), workplace violence (physical violence, bullying, and discrimination), shift work, and job promotion were associated with long sickness absence. Almost no difference was observed according to gender and country. Comprehensive prevention policies oriented to psychosocial work factors may be useful to prevent long sickness absence at European level.
Su, Weixing; Chen, Hanning; Liu, Fang; Lin, Na; Jing, Shikai; Liang, Xiaodan; Liu, Wei
2017-03-01
Many real-world optimization problems are dynamic, placing demands on convergence and search ability that differ markedly from those of static optimization. Such problems require an optimization algorithm to adaptively track the changing optima of a dynamic environment, rather than only find the global optimum of a static one. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of optimal foraging strategies to balance the exploration-exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, a life-cycle mechanism, and a crossover-based social learning strategy. The proposed CLABC is a more realistic bee-colony model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on the dynamic moving peaks benchmark. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.
Zwingerman, Nora; Medina-Rivera, Alejandra; Kassam, Irfahan; Wilson, Michael D.; Morange, Pierre-Emmanuel; Trégouët, David-Alexandre; Gagnon, France
2017-01-01
Background: Thrombin activatable fibrinolysis inhibitor (TAFI), encoded by the Carboxypeptidase B2 gene (CPB2), is an inhibitor of fibrinolysis and plays a role in the pathogenesis of venous thrombosis. Experimental findings support a functional role of genetic variants in CPB2, while epidemiological studies have been unable to confirm associations with risk of venous thrombosis. Sex-specific effects could underlie the observed inconsistent associations between CPB2 genetic variants and venous thrombosis. Methods: A comprehensive literature search was conducted for associations between the Ala147Thr and Thr325Ile variants and venous thrombosis. Authors were contacted to provide sex-specific genotype counts from their studies. Combined and sex-specific random-effects meta-analyses were used to estimate pooled effect estimates for primary and secondary genetic models. Results: A total of 17 studies met the inclusion criteria. A sex-specific meta-analysis applying a dominant model supported a protective effect of Ala147Thr on venous thrombosis in females (OR = 0.81, 95% CI: 0.68-0.97; p = 0.018), but not in males (OR = 1.06, 95% CI: 0.96-1.16; p = 0.263). Thr325Ile did not show a sex-specific effect but showed variation in allele frequencies by geographic region. A subgroup analysis of studies in European countries showed decreased risk under a recessive model (OR = 0.83, 95% CI: 0.71-0.97; p = 0.021) for venous thrombosis. Conclusions: A comprehensive literature review, including unpublished data, provided greater statistical power for the analyses and decreased the likelihood of publication bias influencing the results. Sex-specific analyses explained apparent discrepancies across genetic studies of Ala147Thr and venous thrombosis. Meanwhile, careful selection of genetic models based on population genetics and evolutionary and biological knowledge can increase power by decreasing the need to adjust for testing multiple models. PMID:28552956
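The pooled odds ratios above come from random-effects meta-analysis. As a rough illustration of the mechanics (with invented log odds ratios, not the study's per-study data), a minimal DerSimonian-Laird pooling:

```python
import numpy as np
from scipy import stats

# Illustrative per-study log odds ratios and standard errors
log_or = np.array([-0.25, -0.10, -0.35, 0.05, -0.20])
se = np.array([0.15, 0.20, 0.25, 0.18, 0.12])

w = 1.0 / se**2
fixed = np.sum(w * log_or) / np.sum(w)           # fixed-effect pooled estimate
Q = np.sum(w * (log_or - fixed) ** 2)            # heterogeneity statistic
df = len(log_or) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci = pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_pooled
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {np.exp(ci).round(2)}")
```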
Emberson, Lauren L.; Rubinstein, Dani
2016-01-01
The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1—dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study are already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779
A Comprehensive Study of Retinal Vessel Classification Methods in Fundus Images
Miri, Maliheh; Amini, Zahra; Rabbani, Hossein; Kafieh, Raheleh
2017-01-01
Nowadays, it is obvious that there is a relationship between changes in the retinal vessel structure and diseases such as diabetes, hypertension, stroke, and other cardiovascular diseases in adults, as well as retinopathy of prematurity in infants. Retinal fundus images provide non-invasive visualization of the retinal vessel structure. Applying image processing techniques in the study of digital color fundus photographs and analyzing their vasculature is a reliable approach for early diagnosis of the aforementioned diseases. Reduction in the arteriolar-venular ratio of the retina is one of the primary signs of hypertensive, diabetic, and cardiovascular disease, and it can be calculated by analyzing fundus images. To achieve a precise measurement of this parameter and meaningful diagnostic results, accurate classification of arteries and veins is necessary. Classification of vessels in fundus images faces several challenges. In this paper, a comprehensive study of the proposed methods for classification of arteries and veins in fundus images is presented. Considering that these methods are evaluated on different datasets and use different evaluation criteria, it is not possible to conduct a fair comparison of their performance. Therefore, we evaluate the classification methods from a modeling perspective. This analysis reveals that most of the proposed approaches have focused on statistical and geometric models in the spatial domain, while transform-domain models have received less attention. This could suggest the possibility of using transform models, especially data-adaptive ones, for modeling of fundus images in future classification approaches. PMID:28553578
Tulsky, David S.; Jette, Alan; Kisala, Pamela A.; Kalpakjian, Claire; Dijkers, Marcel P.; Whiteneck, Gale; Ni, Pengsheng; Kirshblum, Steven; Charlifue, Susan; Heinemann, Allen W.; Forchheimer, Martin; Slavin, Mary; Houlihan, Bethlyn; Tate, Denise; Dyson-Hudson, Trevor; Fyffe, Denise; Williams, Steve; Zanca, Jeanne
2012-01-01
Objective: To develop a comprehensive set of patient-reported items to assess multiple aspects of physical functioning relevant to the lives of people with spinal cord injury (SCI) and to evaluate the underlying structure of physical functioning. Design: Cross-sectional. Setting: Inpatient and community. Participants: Item pools of physical functioning were developed, refined, and field tested in a large sample of 855 individuals with traumatic spinal cord injury stratified by diagnosis, severity, and time since injury. Interventions: None. Main Outcome Measure: SCI-FI measurement system. Results: Confirmatory factor analysis (CFA) indicated that a 5-factor model, including basic mobility, ambulation, wheelchair mobility, self-care, and fine motor, had the best model fit and was most closely aligned conceptually with feedback received from individuals with SCI and SCI clinicians. When just the items making up basic mobility were tested in CFA, the fit statistics indicated strong support for a unidimensional model. Similar results were demonstrated for each of the other four factors, indicating unidimensional models. Conclusions: Though unidimensional or 2-factor (mobility and upper extremity) models of physical functioning make up outcome measures in the general population, the underlying structure of physical functioning in SCI is more complex. A 5-factor solution allows for comprehensive assessment of key domain areas of physical functioning. These results informed the structure and development of the SCI-FI measurement system of physical functioning. PMID:22609299
Education Statistics Quarterly. Volume 6, Issue 4, 2004. NCES 2006-613
ERIC Educational Resources Information Center
National Center for Education Statistics, 2006
2006-01-01
The "Quarterly" offers a comprehensive overview of work done across all of the National Center for Education Statistics (NCES). Each issue includes short publications and summaries covering all NCES publications and data products released in a given time period as well as notices about training and funding opportunities. In addition,…
American Indians. 1970 Census of Population, Subject Reports.
ERIC Educational Resources Information Center
Department of Commerce, Washington, DC.
The in-depth statistical profile of the American Indian's condition today is the most comprehensive ever done on the subject by the Bureau of the Census (U.S. Department of Commerce, Social and Economic Statistics Administration). Presenting information from the 1970 Census of Population and Housing, it includes tribal and reservation data and…
The Empirical Review of Meta-Analysis Published in Korea
ERIC Educational Resources Information Center
Park, Sunyoung; Hong, Sehee
2016-01-01
Meta-analysis is a statistical method that is increasingly utilized to combine and compare the results of previous primary studies. However, because of the lack of comprehensive guidelines for how to use meta-analysis, many meta-analysis studies have failed to consider important aspects, such as statistical programs, power analysis, publication…
Public Library Statistics, 1950. Bulletin, 1953, No. 9
ERIC Educational Resources Information Center
Dunbar, Ralph M.
1954-01-01
The Office of Education has long been interested in the development of public libraries as agencies to further the educational progress of the nation. Beginning with 1870, it has issued at intervals statistical compilations on the status of the various types of libraries. Marking a change in that program, the comprehensive collection covering…
ERIC Educational Resources Information Center
Freng, Scott; Webber, David; Blatter, Jamin; Wing, Ashley; Scott, Walter D.
2011-01-01
Comprehension of statistics and research methods is crucial to understanding psychology as a science (APA, 2007). However, psychology majors sometimes approach methodology courses with derision or anxiety (Onwuegbuzie & Wilson, 2003; Rajecki, Appleby, Williams, Johnson, & Jeschke, 2005); consequently, students may postpone…
Education Statistics Quarterly. Volume 4 Issue 4, 2002.
ERIC Educational Resources Information Center
National Center for Education Statistics, 2002
2002-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
This analysis updates EPA's standard VSL estimate by using a more comprehensive collection of VSL studies that include studies published between 1992 and 2000, as well as applying a more appropriate statistical method. We provide a pooled effect VSL estimate by applying the empi...
Seetha, Anitha; Tsusaka, Takuji W; Munthali, Timalizge W; Musukwa, Maggie; Mwangwela, Agnes; Kalumikiza, Zione; Manani, Tinna; Kachulu, Lizzie; Kumwenda, Nelson; Musoke, Mike; Okori, Patrick
2018-04-01
The present study examined the impacts of training on nutrition, hygiene and food safety designed by the Nutrition Working Group, Child Survival Collaborations and Resources Group (CORE). Adapted from the 21d Positive Deviance/Hearth model, mothers were trained on the subjects of appropriate complementary feeding, water, sanitation and hygiene (WASH) practices, and aflatoxin contamination in food. To assess the impacts on child undernutrition, a randomised controlled trial was implemented on a sample of 179 mothers and their children (<2 years old) in two districts of Malawi, namely Mzimba and Balaka. Setting: A 21d intensive learning-by-doing process using the positive deviance approach. Subjects: Malawian children and mothers. Difference-in-difference panel regression analysis revealed that the impacts of the comprehensive training were positive and statistically significant on the Z-scores for wasting and underweight, where the effects increased steadily over time within the 21d time frame. As for stunting, the coefficients were not statistically significant during the 21d programme, although the level of significance began increasing after 2 weeks, indicating that stunting should also be alleviated over a slightly longer time horizon. The study clearly suggests that comprehensive training immediately guides mothers into improved dietary and hygiene practices, and that improved practices take immediate and progressive effect in ameliorating children's undernutrition.
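Difference-in-difference on panel data reduces to an interaction term in a regression: the treated-by-post coefficient is the effect estimate. A minimal sketch with synthetic data (variable names and effect size are invented for illustration, not the trial's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = mother received the training
    "post": rng.integers(0, 2, n),      # 1 = observation after the programme
})
effect = 0.4                            # assumed true DiD effect on a Z-score
df["zscore"] = (-1.0 + 0.1 * df.treated + 0.05 * df.post
                + effect * df.treated * df.post + rng.normal(0, 0.5, n))

# The interaction treated:post is the difference-in-difference estimate
res = smf.ols("zscore ~ treated * post", data=df).fit()
print(res.params["treated:post"], res.pvalues["treated:post"])
```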
Chern, Yahn-Bor; Ho, Pei-Shan; Kuo, Li-Chueh; Chen, Jin-Bor
2013-01-01
Peritoneal dialysis (PD)-related peritonitis remains an important complication in PD patients, potentially causing technique failure and influencing patient outcome. To date, no comprehensive study in the Taiwanese PD population has used a time-dependent statistical method to analyze the factors associated with PD-related peritonitis. Our single-center retrospective cohort study, conducted in southern Taiwan between February 1999 and July 2010, used time-dependent statistical methods to analyze the factors associated with PD-related peritonitis. The study recruited 404 PD patients for analysis, 150 of whom experienced at least 1 episode of peritonitis during the follow-up period. The incidence rate of peritonitis was highest during the first 6 months after PD start. A comparison of patients in the two groups (peritonitis vs no peritonitis) by univariate analysis showed that the peritonitis group included fewer men (p = 0.048) and more patients of older age (≥65 years, p = 0.049). In addition, patients who had never received compulsory education showed a statistically higher incidence of PD-related peritonitis in the univariate analysis (p = 0.04). A proportional hazards model identified education level (less than elementary school vs any higher education level) as having an independent association with PD-related peritonitis [hazard ratio (HR): 1.45; 95% confidence interval (CI): 1.01 to 2.06; p = 0.045]. Comorbidities measured using the Charlson comorbidity index (score >2 vs ≤2) showed borderline statistical significance (HR: 1.44; 95% CI: 1.00 to 2.13; p = 0.053). A lower education level is a major risk factor for PD-related peritonitis independent of age, sex, hypoalbuminemia, and comorbidities. Our study emphasizes that a comprehensive PD education program is crucial for PD patients with a lower education level.
NASA Astrophysics Data System (ADS)
Fajber, R. A.; Kushner, P. J.; Laliberte, F. B.
2017-12-01
In the midlatitude atmosphere, baroclinic eddies are able to raise warm, moist air from the surface into the midtroposphere where it condenses and warms the atmosphere through latent heating. This coupling between dynamics and moist thermodynamics motivates using a conserved moist thermodynamic variable, such as the equivalent potential temperature, to study the midlatitude circulation and associated heat transport since it implicitly accounts for latent heating. When the equivalent potential temperature is used to zonally average the circulation, the moist isentropic circulation takes the form of a single cell in each hemisphere. By utilising the statistical transformed Eulerian mean (STEM) circulation we are able to parametrize the moist isentropic circulation in terms of second order dynamic and moist thermodynamic statistics. The functional dependence of the STEM allows us to analytically calculate functional derivatives that reveal the spatially varying sensitivity of the moist isentropic circulation to perturbations in different statistics. Using the STEM functional derivatives as sensitivity kernels we interpret changes in the moist isentropic circulation from two experiments: surface heating in an idealised moist model, and a climate change scenario in a comprehensive atmospheric general circulation model. In both cases we find that the changes in the moist isentropic circulation are well predicted by the functional sensitivities, and that the total heat transport is more sensitive to changes in dynamical processes driving local changes in poleward heat transport than it is to thermodynamic and/or radiative processes driving changes to the distribution of equivalent potential temperature.
voomDDA: discovery of diagnostic biomarkers and classification of RNA-seq data.
Zararsiz, Gokmen; Goksuluk, Dincer; Klaus, Bernd; Korkmaz, Selcuk; Eldem, Vahap; Karabulut, Erdem; Ozturk, Ahmet
2017-01-01
RNA-Seq is a recent and efficient technique that uses the capabilities of next-generation sequencing technology for characterizing and quantifying transcriptomes. One important task using gene-expression data is to identify a small subset of genes that can be used to build diagnostic classifiers, particularly for cancer. Microarray-based classifiers are not directly applicable to RNA-Seq data due to its discrete nature. Overdispersion is another problem that requires careful modeling of the mean-variance relationship of RNA-Seq data. In this study, we present voomDDA classifiers: variance modeling at the observational level (voom) extensions of the nearest shrunken centroids (NSC) and diagonal discriminant classifiers. VoomNSC is one of these classifiers and brings the voom and NSC approaches together for the purpose of gene-expression-based classification. For this purpose, we propose weighted statistics and put these weighted statistics into the NSC algorithm. VoomNSC is a sparse classifier that models the mean-variance relationship using the voom method and incorporates voom's precision weights into the NSC classifier via weighted statistics. A comprehensive simulation study was designed, and four real datasets were used for performance assessment. The overall results indicate that voomNSC performs as the sparsest classifier. It also provides the most accurate results, together with power-transformed Poisson linear discriminant analysis, rlog-transformed support vector machines, and random forests. In addition to prediction purposes, the voomNSC classifier can be used to identify potential diagnostic biomarkers for a condition of interest. Through this work, statistical learning methods proposed for microarrays can be reused for RNA-Seq data. An interactive web application is freely available at http://www.biosoft.hacettepe.edu.tr/voomDDA/.
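To make the "weighted statistics inside NSC" idea concrete, here is a compact sketch of nearest shrunken centroids with per-observation weights. It omits NSC's within-class standardization and uses random stand-in weights rather than voom's precision weights, so it illustrates the idea rather than reproducing the authors' algorithm:

```python
import numpy as np

def weighted_nsc_fit(X, y, w, delta):
    classes = np.unique(y)
    mu = np.average(X, axis=0, weights=w)              # overall weighted mean
    cents = {}
    for k in classes:
        m = y == k
        mu_k = np.average(X[m], axis=0, weights=w[m])  # weighted class centroid
        d = mu_k - mu
        d = np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)  # soft threshold
        cents[k] = mu + d
    return cents

def predict(X, cents):
    keys = list(cents)
    dists = np.stack([((X - cents[k]) ** 2).sum(axis=1) for k in keys])
    return np.array(keys)[np.argmin(dists, axis=0)]

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 100))
y = np.repeat([0, 1], 30)
X[y == 1, :5] += 1.5                         # 5 informative "genes"
w = rng.uniform(0.5, 1.5, 60)                # stand-in precision weights
cents = weighted_nsc_fit(X, y, w, delta=0.5)
print("train accuracy:", (predict(X, cents) == y).mean())
```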
CSRQ Center Report on Elementary School Comprehensive School Reform Models: Educator's Summary
ERIC Educational Resources Information Center
Center for Data-Driven Reform in Education (NJ3), 2008
2008-01-01
Which comprehensive school reform programs have evidence of positive effects on elementary school achievement? To find out, this review summarizes evidence on comprehensive school reform (CSR) models in elementary schools, grades K-6. Comprehensive school reform models are programs used schoolwide to improve student achievement. They typically…
NASA Astrophysics Data System (ADS)
Pearl, Judea
2000-03-01
Written by one of the pre-eminent researchers in the field, this book provides a comprehensive exposition of modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, philosophy, cognitive science, and the health and social sciences. Pearl presents a unified account of the probabilistic, manipulative, counterfactual and structural approaches to causation, and devises simple mathematical tools for analyzing the relationships between causal connections, statistical associations, actions and observations. The book will open the way for including causal analysis in the standard curriculum of statistics, artificial intelligence, business, epidemiology, social science and economics. Students in these areas will find natural models, simple identification procedures, and precise mathematical definitions of causal concepts that traditional texts have tended to evade or make unduly complicated. This book will be of interest to professionals and students in a wide variety of fields. Anyone who wishes to elucidate meaningful relationships from data, predict effects of actions and policies, assess explanations of reported events, or form theories of causal understanding and causal speech will find this book stimulating and invaluable.
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
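A Monte Carlo sketch of the setting analyzed above: fingerprints computed at linearly placed reference points under a log-distance path-loss model with shadowing noise, matched by nearest neighbor in signal space. The path-loss coefficients, noise level, and geometry are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
ap = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])  # access points (m)
rps = np.column_stack([np.linspace(1, 19, 10), np.full(10, 5.0)])  # linear RPs

def rss(p, sigma=0.0):
    # Log-distance path loss: RSS(d) = -40 - 30*log10(d) dB, plus shadowing
    d = np.linalg.norm(ap - p, axis=1)
    return -40.0 - 30.0 * np.log10(d) + rng.normal(0.0, sigma, len(ap))

radio_map = np.array([rss(rp) for rp in rps])            # noiseless fingerprints

errs = []
for _ in range(2000):
    true = rng.uniform([1, 1], [19, 9])                  # random test location
    obs = rss(true, sigma=3.0)                           # 3 dB shadowing noise
    nearest = rps[np.argmin(((radio_map - obs) ** 2).sum(axis=1))]
    errs.append(np.linalg.norm(nearest - true))
print(f"mean error {np.mean(errs):.2f} m, "
      f"95th pct {np.percentile(errs, 95):.2f} m")
```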
2015-06-18
Engineering Effectiveness Survey. CMU/SEI-2012-SR-009. Carnegie Mellon University. November 2012. Field, Andy. Discovering Statistics Using SPSS, 3rd...enough into the survey to begin answering questions on risk practices. All of the statistical analysis of the data will be performed using SPSS. Prior to...probabilistically using distributions for likelihood and impact. Statistical methods like Monte Carlo can more comprehensively evaluate the cost and
Evaluation of risk communication in a mammography patient decision aid.
Klein, Krystal A; Watson, Lindsey; Ash, Joan S; Eden, Karen B
2016-07-01
We characterized patients' comprehension, memory, and impressions of risk communication messages in a patient decision aid (PtDA), Mammopad, and clarified perceived importance of numeric risk information in medical decision making. Participants were 75 women in their forties with average risk factors for breast cancer. We used mixed methods, comprising a risk estimation problem administered within a pretest-posttest design, and semi-structured qualitative interviews with a subsample of 21 women. Participants' positive predictive value estimates of screening mammography improved after using Mammopad. Although risk information was only briefly memorable, through content analysis, we identified themes describing why participants value quantitative risk information, and obstacles to understanding. We describe ways the most complicated graphic was incompletely comprehended. Comprehension of risk information following Mammopad use could be improved. Patients valued receiving numeric statistical information, particularly in pictograph format. Obstacles to understanding risk information, including potential for confusion between statistics, should be identified and mitigated in PtDA design. Using simple pictographs accompanied by text, PtDAs may enhance a shared decision-making discussion. PtDA designers and providers should be aware of benefits and limitations of graphical risk presentations. Incorporating comprehension checks could help identify and correct misapprehensions of graphically presented statistics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
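The risk-estimation problem above concerns the positive predictive value of screening mammography, which follows directly from Bayes' rule. The sketch below uses illustrative inputs for women in their forties, not Mammopad's actual figures:

```python
# Illustrative inputs (assumed, not Mammopad's figures)
prevalence = 0.01    # P(cancer)
sensitivity = 0.85   # P(positive test | cancer)
specificity = 0.90   # P(negative test | no cancer)

# Bayes' rule: PPV = P(cancer | positive test)
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos
print(f"PPV = {ppv:.1%}")  # at low prevalence, most positives are false alarms
```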
McNeill, Marjorie H
2009-01-01
The purpose of this research study was to determine whether the administration of a comprehensive examination before graduation increases the percentage of students passing the Registered Health Information Administrator certification examination. A t-test for independent means yielded a statistically significant difference between the Registered Health Information Administrator certification examination pass rates of health information administration programs that administer a comprehensive examination and programs that do not administer a comprehensive examination. Programs with a high certification examination pass rate do not require a comprehensive examination when compared with those programs with a lower pass rate. It is concluded that health information administration faculty at the local level should perform program self-analysis to improve student progress toward achievement of learning outcomes and entry-level competencies.
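The reported comparison is an independent-means t-test on program pass rates. A minimal sketch with synthetic numbers (the study's actual pass rates are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic pass rates (%) for programs with vs. without a comprehensive exam
with_exam = rng.normal(78, 10, 25)
without_exam = rng.normal(68, 12, 25)

t, p = stats.ttest_ind(with_exam, without_exam)  # t-test for independent means
print(f"t = {t:.2f}, p = {p:.4f}")
```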
Yeari, Menahem; van den Broek, Paul
2016-09-01
It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
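A minimal sketch of landscape-style activation dynamics, under stated assumptions: concepts mentioned in a reading cycle receive full activation, activation carries over with decay, and episodic connection strengths accumulate from co-activation. The concepts, cycles, and parameter values are invented, and the published model further couples these dynamics with latent semantic analysis vectors.

```python
# Toy landscape-style activation dynamics over reading cycles.
import numpy as np

concepts = ["knight", "dragon", "sword", "castle"]
cycles = [[0, 3], [0, 1], [1, 2], [0, 2, 1]]   # concept indices mentioned per cycle
decay = 0.5
act = np.zeros(len(concepts))
conn = np.zeros((len(concepts), len(concepts)))

for mentioned in cycles:
    act *= decay                      # carryover of activation with decay
    act[mentioned] = 1.0              # full activation for mentioned concepts
    conn += np.outer(act, act)        # co-activation strengthens connections

print(np.round(conn, 2))              # basis for centrality and recall predictions
```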
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with the models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models has been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.
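A minimal numerical sketch of the two-step estimation, for a generic real-valued linear forward model y = Ax + n with white Gaussian noise; the matrix A and its dimensions are hypothetical stand-ins for the stripmap SAR acquisition model derived in the paper.

```python
# Two-step ML estimation for a generic linear model, plus the CRLB.
import numpy as np

rng = np.random.default_rng(0)
m, p = 400, 60                       # measurements, reflectivity parameters
A = rng.standard_normal((m, p))      # hypothetical acquisition matrix
x_true = rng.standard_normal(p)
sigma2 = 0.1
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)

# Step 1: cross-correlate the data with the acquisition model
# (essentially back-projection processing).
x_bp = A.T @ y

# Step 2: full system inversion, mitigating the sidelobes of A^T A.
AtA = A.T @ A
x_ml = np.linalg.solve(AtA, x_bp)

# CRLB for unbiased estimates of x: diagonal of sigma^2 (A^T A)^{-1}.
crlb = sigma2 * np.diag(np.linalg.inv(AtA))
print(f"ML error: {np.linalg.norm(x_ml - x_true):.3f}; mean CRLB: {crlb.mean():.2e}")
```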
NASA Astrophysics Data System (ADS)
Borri, Claudia; Paggi, Marco
2015-02-01
The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a critical comparison on this matter, offering insight into the capabilities and limitations of applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out, and comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD with changing resolution than expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of the joint PDFs of textured surfaces in the case of special surface treatments, not accounted for by fractal modeling.
Simulation on a car interior aerodynamic noise control based on statistical energy analysis
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Dengfeng; Ma, Zhengdong
2012-09-01
Accurate simulation of interior aerodynamic noise is an important problem in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces is shown to be the key factor in controlling car interior aerodynamic noise at high frequency and high speed. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model provides a solid foundation for further optimization of car interior noise control. After a comprehensive SEA analysis identifies the subsystems whose power contributions to car interior noise are most sensitive, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Vehicle testing results show that interior acoustic performance can be improved by using a detailed SEA model, comprising more than 80 subsystems, together with unsteady aerodynamic pressure calculations on body surfaces and improvement of the sound and damping properties of materials. Reductions of more than 2 dB are achieved at band centre frequencies above 800 Hz. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and acoustic contribution sensitivity analysis.
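As a schematic of the SEA power-balance bookkeeping behind such a model, the sketch below solves for subsystem energies in a hypothetical three-subsystem chain. All loss factors, modal densities, and input powers are invented; the production model described above couples more than 80 subsystems.

```python
# Toy SEA power balance: body panel -> door cavity -> interior cavity.
import numpy as np

omega = 2 * np.pi * 1000.0              # band centre frequency, rad/s
eta = np.array([0.02, 0.01, 0.005])     # internal (damping) loss factors
eta_c = {(0, 1): 0.003, (1, 2): 0.002}  # coupling loss factors i -> j
n = np.array([10.0, 20.0, 30.0])        # modal densities, for reciprocity

# Assemble the SEA matrix L so that omega * L @ E = P_in.
L = np.diag(eta.copy())
for (i, j), cij in eta_c.items():
    cji = cij * n[i] / n[j]             # reciprocity: eta_ji = eta_ij n_i / n_j
    L[i, i] += cij
    L[j, j] += cji
    L[i, j] -= cji
    L[j, i] -= cij

P_in = np.array([1.0, 0.0, 0.0])        # W, aerodynamic loading on the panel
E = np.linalg.solve(omega * L, P_in)    # steady-state subsystem energies
print("subsystem energies:", E)          # interior SPL follows from E[2]
```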
Factors associated with comprehensive dental care following an initial emergency dental visit.
Johnson, Jeffrey T; Turner, Erwin G; Novak, Karen F; Kaplan, Alan L
2005-01-01
The purpose of this study was to characterize utilization of a dental home among a patient population grouped by: (1) age; (2) sex; and (3) payment method. A retrospective chart review of 1,020 patients, who initially presented for an emergency visit, was performed. From the original data pool, 2 groups were delineated: (1) those patients who returned for comprehensive dental care; and (2) those who did not return for comprehensive dental care. Patients with private dental insurance or Medicaid dental benefits were statistically more likely to return for comprehensive oral health care than those with no form of dental insurance. Younger patients (≤3 years of age) were least likely to return for comprehensive dental care. Socioeconomic factors play a crucial role in care-seeking behaviors. These obstacles are often a barrier to preventive and comprehensive oral health care.
Vanatta, Jason M; Dean, Amanda G; Hathaway, Donna K; Nair, Satheesh; Modanlou, Kian A; Campos, Luis; Nezakatgoo, Nosratollah; Satapathy, Sanjaya K; Eason, James D
2013-04-01
Organ donation after cardiac death remains an available resource to meet the demand for transplant. However, concern persists that outcomes associated with donation after cardiac death liver allografts are not equivalent to those obtained with organ donation after brain death. The aim of this matched case-control study was to determine whether outcomes of liver transplants from donation after cardiac death donors are equivalent to outcomes with donation after brain death donors by controlling for careful donor and recipient selection, surgical technique, and preservation solution. A retrospective, matched case-control study of adult liver transplant recipients at the University of Tennessee/Methodist University Hospital Transplant Institute, Memphis, Tennessee was performed. Thirty-eight donation after cardiac death recipients were matched 1:2 with 76 donation after brain death recipients by recipient age, recipient laboratory Model for End Stage Liver Disease score, and donor age to form the 2 groups. A comprehensive approach that controlled for careful donor and recipient matching, surgical technique, and preservation solution was used to minimize warm ischemia time, cold ischemia time, and ischemia-reperfusion injury. Patient and graft survival rates were similar in both groups at 1 and 3 years (P = .444 and P = .295). There was no statistically significant difference in primary nonfunction, vascular complications, or biliary complications. In particular, there was no statistically significant difference in ischemic-type diffuse intrahepatic strictures (P = .107). These findings provide further evidence that the excellent patient and graft survival rates expected with liver transplants using organ donation after brain death donors can be achieved with organ donation after cardiac death donors without statistically higher rates of morbidity or mortality when a comprehensive approach that controls for careful donor and recipient matching, surgical technique, and preservation solution is used.
Wen, Yi Feng; Wong, Hai Ming; Lin, Ruitao; Yin, Guosheng; McGrath, Colman
2015-01-01
Numerous facial photogrammetric studies have been published around the world. We aimed to critically review these studies so as to establish population norms for various angular and linear facial measurements; and to determine inter-ethnic/racial facial variations. A comprehensive and systematic search of PubMed, ISI Web of Science, Embase, and Scopus was conducted to identify facial photogrammetric studies published before December, 2014. Subjects of eligible studies were either Africans, Asians or Caucasians. A Bayesian hierarchical random effects model was developed to estimate posterior means and 95% credible intervals (CrI) for each measurement by ethnicity/race. Linear contrasts were constructed to explore inter-ethnic/racial facial variations. We identified 38 eligible studies reporting 11 angular and 18 linear facial measurements. Risk of bias of the studies ranged from 0.06 to 0.66. At the significance level of 0.05, African males were found to have smaller nasofrontal angle (posterior mean difference: 8.1°, 95% CrI: 2.2°-13.5°) compared to Caucasian males and larger nasofacial angle (7.4°, 0.1°-13.2°) compared to Asian males. Nasolabial angle was more obtuse in Caucasian females than in African (17.4°, 0.2°-35.3°) and Asian (9.1°, 0.4°-17.3°) females. Additional inter-ethnic/racial variations were revealed when the level of statistical significance was set at 0.10. A comprehensive database for angular and linear facial measurements was established from existing studies using the statistical model and inter-ethnic/racial variations of facial features were observed. The results have implications for clinical practice and highlight the need and value for high quality photogrammetric studies.
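The paper fits a fully Bayesian hierarchical model; as a simpler classical stand-in for the same pooling idea, the sketch below applies DerSimonian-Laird random-effects weighting to invented study means of a single facial angle.

```python
# Classical random-effects pooling (DerSimonian-Laird) as a stand-in for
# the paper's Bayesian hierarchical model. All numbers are invented.
import numpy as np

y = np.array([134.2, 138.5, 131.9, 136.4])   # per-study means (degrees)
v = np.array([1.2, 0.8, 2.0, 1.5])           # per-study variances

w_fixed = 1.0 / v
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - y_fixed) ** 2)                 # Cochran's Q
df = len(y) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)                            # between-study variance

w = 1.0 / (v + tau2)                                     # random-effects weights
mu = np.sum(w * y) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled mean {mu:.1f} deg, 95% CI {mu - 1.96*se:.1f} to {mu + 1.96*se:.1f}")
```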
Zhang, Peng; Tian, Jing; Jing, Li; Wang, Quan; Tian, Jinhui; Lun, Li
2016-10-01
Available evidence shows inconsistent results between laparoscopic Nissen's fundoplication (LNF) and open Nissen's fundoplication (ONF) for children with gastro-oesophageal reflux disease (GERD), so this study aimed to evaluate the efficacy and safety of LNF versus ONF. Systematic, comprehensive literature searches were conducted to include randomized controlled trials (RCTs) that compared LNF and ONF for GERD. Two reviewers independently selected studies, abstracted data and assessed the methodological quality and evidence level. Data were analyzed with Review Manager Version 5.0. Risk ratio (RR) was used for dichotomous outcomes, and mean difference (MD) for continuous scales. Heterogeneity was estimated with the I² statistic; a fixed-effect model was used if I² < 50%, and a random-effects model otherwise. Three RCTs (171 children) were included. There was no statistically significant difference in mortality (RR 1.12, 95% CI 0.50–2.48), postoperative complications (RR 0.87, 95% CI 0.61–1.25), readmission (RR 1.53, 95% CI 0.67–3.51), or hospital stay (MD 0.85, 95% CI -0.06 to 1.75) between LNF and ONF. However, LNF was associated with a higher incidence of recurrence (RR 3.32, 95% CI 1.40–7.84) and longer surgery duration (MD 76.33, 95% CI 69.37–83.28), but less retching (RR 0.11, 95% CI 0.02–0.58) than ONF. LNF may be as effective and safe as ONF in the short and long term, but both were associated with high risk of recurrence and mortality, especially for children with neurological impairment, children younger than 18 months, and girls. This requires a comprehensive evaluation of children before surgery. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
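The meta-analytic machinery named above can be sketched as follows: risk ratios from 2x2 tables, Cochran's Q, the I² statistic, and the fixed- versus random-effects rule. The trial counts are invented and do not correspond to the three included RCTs.

```python
# Risk-ratio meta-analysis with I^2-based model choice, on invented counts.
import numpy as np

# (events, total) for LNF and ONF in three hypothetical trials
lnf = np.array([[4, 30], [6, 25], [5, 31]])
onf = np.array([[2, 28], [2, 26], [1, 32]])

log_rr = np.log((lnf[:, 0] / lnf[:, 1]) / (onf[:, 0] / onf[:, 1]))
var = 1/lnf[:, 0] - 1/lnf[:, 1] + 1/onf[:, 0] - 1/onf[:, 1]  # var of log RR

w = 1.0 / var                                    # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - pooled) ** 2)           # Cochran's Q
I2 = max(0.0, (Q - (len(log_rr) - 1)) / Q) * 100 if Q > 0 else 0.0
model = "fixed-effect" if I2 < 50 else "random-effects"
print(f"pooled RR = {np.exp(pooled):.2f}, I^2 = {I2:.0f}% -> {model} model")
```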
A fuzzy logic-based model for noise control at industrial workplaces.
Aluclu, I; Dalgic, A; Toprak, Z F
2008-05-01
Ergonomics is a broad science encompassing the wide variety of working conditions that can affect worker comfort and health, including factors such as lighting, noise, temperature, vibration, workstation design, tool design, machine design, etc. This paper describes noise-human response and a fuzzy logic model developed through comprehensive field studies of noise measurements (including atmospheric parameters) and control measures. The model has two subsystems constructed on noise reduction quantity in dB. The first subsystem of the fuzzy model, which depends on 549 linguistic rules, covers the acoustical features of all materials used in any workplace. In total, 984 patterns were used: 503 patterns for model development and the remaining 481 patterns for testing the model. The second subsystem deals with atmospheric parameter interactions with noise and has 52 linguistic rules. Similarly, 94 field patterns were obtained; 68 patterns were used for the training stage of the model and the remaining 26 patterns for testing the model. These rules were determined by taking into consideration formal standards, the experience of specialists, and the measurement patterns. The results of the model were compared using various statistics (correlation coefficients, max-min, standard deviation, average, and coefficient of skewness) and error measures (root mean square error and relative error). The correlation coefficients were significantly high, the error measures were quite low, and the other statistics were very close to the data, indicating the validity of the model. Therefore, the model can be used for noise control in any workplace and is helpful to the designer in the planning stage of a workplace.
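A tiny Mamdani-style inference in the spirit of the model above: one linguistic rule mapping absorption and damping to a noise-reduction estimate in dB. The membership functions and the rule are invented for illustration; the actual model has 549 + 52 rules.

```python
# One-rule Mamdani fuzzy inference with centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer(absorption: float, damping: float) -> float:
    # Rule: IF absorption is HIGH and damping is HIGH THEN reduction is LARGE
    fire = min(tri(absorption, 0.4, 0.8, 1.0), tri(damping, 0.02, 0.06, 0.1))
    z = np.linspace(0.0, 20.0, 201)            # candidate reduction, dB
    large = tri(z, 8.0, 14.0, 20.0)
    clipped = np.minimum(large, fire)          # Mamdani implication
    if clipped.sum() == 0:
        return 0.0
    return float(np.sum(z * clipped) / np.sum(clipped))  # centroid

print(f"predicted noise reduction: {infer(0.7, 0.05):.1f} dB")
```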
The association between education and induced abortion for three cohorts of adults in Finland
Väisänen, Heini
2015-01-01
This paper explores whether the likelihood of abortion by education changed over time in Finland, where comprehensive family planning services and sexuality education have been available since the early 1970s. This subject has not previously been studied longitudinally with comprehensive and reliable data. A unique longitudinal set of register data of more than 250,000 women aged 20–49 born in 1955–59, 1965–69, and 1975–79 was analysed, using descriptive statistics, concentration curves, and discrete-time event-history models. Women with basic education had a higher likelihood of abortion than others and the association grew stronger for later cohorts. Selection into education may explain this phenomenon: although it was fairly common to have only basic education in the 1955–59 cohort, it became increasingly unusual over time. Thus, even though family planning services were easily available, socio-economic differences in the likelihood of abortion remained. PMID:26449684
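A hedged sketch of the discrete-time event-history setup mentioned above: person-period records with a logistic model for the yearly hazard of abortion by education. All data are synthetic stand-ins for the register data, and the model is far simpler than the published one.

```python
# Person-period discrete-time event-history model on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000
basic_edu = rng.random(n) < 0.3                    # basic education only
ages, events, edu = [], [], []
for i in range(n):
    hazard = 0.015 + (0.010 if basic_edu[i] else 0.0)   # yearly probability
    for age in range(20, 50):
        event = rng.random() < hazard
        ages.append(age); events.append(event); edu.append(basic_edu[i])
        if event:
            break                                  # stop at first event, for simplicity

X = sm.add_constant(np.column_stack([np.array(edu, float), np.array(ages, float)]))
fit = sm.Logit(np.array(events, float), X).fit(disp=0)
print(f"odds ratio for basic education: {np.exp(fit.params[1]):.2f}")
```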
Chheang, Dany; Connolly, Eric J
2017-09-01
The collective view of Asian Americans as model minorities is evident with the extensive amount of statistical data showing support for the academic and socioeconomic success of Asian Americans in the United States. This perception, however, often presents an inaccurate portrayal of Asian Americans, in general, as it overlooks many of the difficulties and hardships experienced by Asian American ethnic groups such as Southeast Asians. Within this group, Cambodian Americans are at the highest risk for experiencing socioeconomic hardships, behavioral health problems, substance use disorders, and contact with the criminal justice system, with deportation also being a prevailing issue. Unfortunately, research in this area is scant and contemporary research on Cambodian Americans has several limitations. To begin to address this issue, the present article merges information from existing research on this population from a sociohistorical, criminological, and theoretical standpoint to call for more comprehensive research on Cambodian Americans.
ERIC Educational Resources Information Center
Falcon, Evelyn
2017-01-01
The purpose of this study was to examine if there is any relationship on reading comprehension when background classical music is played in the setting of a 7th and 8th grade classroom. This study also examined if there was a statistically significant difference in test anxiety when listening to classical music while completing a test. Reading…
Adaptation in Coding by Large Populations of Neurons in the Retina
NASA Astrophysics Data System (ADS)
Ioffe, Mark L.
A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion (greater than half) of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitted to the retinal response under different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent picture which is discussed at the end of Chapter 5. We conclude with a few future directions related to this thesis.
Accounting for disease modifying therapy in models of clinical progression in multiple sclerosis.
Healy, Brian C; Engler, David; Gholipour, Taha; Weiner, Howard; Bakshi, Rohit; Chitnis, Tanuja
2011-04-15
Identifying predictors of clinical progression in patients with relapsing-remitting multiple sclerosis (RRMS) is complicated in the era of disease modifying therapy (DMT) because patients follow many different DMT regimens. To investigate predictors of progression in a treated RRMS sample, a cohort of RRMS patients was prospectively followed in the Comprehensive Longitudinal Investigation of Multiple Sclerosis at the Brigham and Women's Hospital (CLIMB). Enrollment criteria were exposure to either interferon-β (IFN-β, n=164) or glatiramer acetate (GA, n=114) for at least 6 months prior to study entry. Baseline demographic and clinical features were used as candidate predictors of longitudinal clinical change on the Expanded Disability Status Scale (EDSS). We compared three approaches to account for DMT effects in statistical modeling. In all approaches, we analyzed all patients together and stratified based on baseline DMT. Model 1 used all available longitudinal EDSS scores, even those after on-study DMT changes. Model 2 used only clinical observations prior to changing DMT. Model 3 used causal statistical models to identify predictors of clinical change. When all patients were considered using Model 1, patients with a motor symptom as the first relapse had significantly larger change in EDSS scores during follow-up (p=0.04); none of the other clinical or demographic variables significantly predicted change. In Models 2 and 3, results were generally unchanged. DMT modeling choice had a modest impact on the variables classified as predictors of EDSS score change. Importantly, however, interpretation of these predictors is dependent upon modeling choice. Copyright © 2011 Elsevier B.V. All rights reserved.
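A sketch of the data restriction behind Model 2, under assumed column names (the CLIMB data structures are not reproduced here): keep only EDSS observations recorded before a patient's first on-study DMT change.

```python
# Restrict longitudinal EDSS records to visits before the first DMT switch.
import pandas as pd

visits = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2],
    "visit_date": pd.to_datetime(
        ["2005-01-10", "2005-07-12", "2006-01-15", "2005-03-01", "2005-09-20"]),
    "edss": [1.0, 1.5, 2.0, 0.0, 1.0],
})
# First on-study DMT change per patient; NaT means no change observed.
dmt_change = pd.Series({1: pd.Timestamp("2005-10-01"), 2: pd.NaT})

cutoff = visits["patient"].map(dmt_change)
keep = cutoff.isna() | (visits["visit_date"] < cutoff)
print(visits[keep])    # the "Model 2" analysis sample
```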
NASA Astrophysics Data System (ADS)
Kreyscher, Martin; Harder, Markus; Lemke, Peter; Flato, Gregory M.
2000-05-01
A hierarchy of sea ice rheologies is evaluated on the basis of a comprehensive set of observational data. The investigations are part of the Sea Ice Model Intercomparison Project (SIMIP). Four different sea ice rheology schemes are compared: a viscous-plastic rheology, a cavitating-fluid model, a compressible Newtonian fluid, and a simple free drift approach with velocity correction. The same grid, land boundaries, and forcing fields are applied to all models. As verification data, there are (1) ice thickness data from upward looking sonars (ULS), (2) ice concentration data from the passive microwave radiometers SMMR and SSM/I, (3) daily buoy drift data obtained by the International Arctic Buoy Program (IABP), and (4) satellite-derived ice drift fields based on the 85 GHz channel of SSM/I. All models are optimized individually with respect to mean drift speed and daily drift speed statistics. The impact of ice strength on the ice cover is best revealed by the spatial pattern of ice thickness, ice drift on different timescales, daily drift speed statistics, and the drift velocities in Fram Strait. Overall, the viscous-plastic rheology yields the most realistic simulation. In contrast, the results of the very simple free-drift model with velocity correction clearly show large errors in simulated ice drift as well as in ice thicknesses and ice export through Fram Strait compared to observation. The compressible Newtonian fluid cannot prevent excessive ice thickness buildup in the central Arctic and overestimates the internal forces in Fram Strait. Because of the lack of shear strength, the cavitating-fluid model shows marked differences to the statistics of observed ice drift and the observed spatial pattern of ice thickness. Comparison of required computer resources demonstrates that the additional cost for the viscous-plastic sea ice rheology is minor compared with the atmospheric and oceanic model components in global climate simulations.
Kumar, Amit; Karmarkar, Amol; Downer, Brian; Vashist, Amit; Adhikari, Deepak; Al Snih, Soham; Ottenbacher, Kenneth
2017-11-01
To compare the performances of 3 comorbidity indices, the Charlson Comorbidity Index, the Elixhauser Comorbidity Index, and the Centers for Medicare & Medicaid Services (CMS) risk adjustment model, Hierarchical Condition Category (HCC), in predicting post-acute discharge settings and hospital readmission for patients after joint replacement. A retrospective study of Medicare beneficiaries with total knee replacement (TKR) or total hip replacement (THR) discharged from hospitals in 2009-2011 (n = 607,349) was performed. Study outcomes were post-acute discharge setting and unplanned 30-, 60-, and 90-day hospital readmissions. Logistic regression models were built to compare the performance of the 3 comorbidity indices using C statistics. The base model included patient demographics and hospital use. Subsequent models included 1 of the 3 comorbidity indices. Additional multivariable logistic regression models were built to identify individual comorbid conditions associated with high risk of hospital readmissions. The 30-, 60-, and 90-day unplanned hospital readmission rates were 5.3%, 7.2%, and 8.5%, respectively. Patients were most frequently discharged to home health (46.3%), followed by skilled nursing facility (40.9%) and inpatient rehabilitation facility (12.7%). The C statistics for the base model in predicting post-acute discharge setting and 30-, 60-, and 90-day readmission in TKR and THR were between 0.63 and 0.67. Adding the Charlson Comorbidity Index, the Elixhauser Comorbidity Index, or HCC increased the C statistic minimally from the base model for predicting both discharge settings and hospital readmission. The health conditions most frequently associated with hospital readmission were diabetes mellitus, pulmonary disease, arrhythmias, and heart disease. The comorbidity indices and CMS-HCC demonstrated weak discriminatory ability to predict post-acute discharge settings and hospital readmission following joint replacement. © 2017, American College of Rheumatology.
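The comparison step can be sketched as follows: C statistics (equivalently, ROC AUC) for a base logistic model versus base plus a comorbidity score, on synthetic data standing in for the Medicare cohort. All variable names and effect sizes are hypothetical.

```python
# C statistics for a base model vs. base + comorbidity score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
age = rng.normal(75, 6, n)
prior_stays = rng.poisson(0.5, n)
comorbidity = rng.poisson(2.0, n)                 # e.g. a comorbidity count
logit = -5.5 + 0.03 * age + 0.3 * prior_stays + 0.25 * comorbidity
readmit = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, prior_stays])
X_full = np.column_stack([age, prior_stays, comorbidity])
for name, X in [("base", X_base), ("base + comorbidity", X_full)]:
    p = LogisticRegression(max_iter=1000).fit(X, readmit).predict_proba(X)[:, 1]
    print(f"{name}: C = {roc_auc_score(readmit, p):.3f}")
```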
ERIC Educational Resources Information Center
Texas Coll. and Univ. System, Austin. Coordinating Board.
Comprehensive statistical data on Texas higher education is presented. Data and formulas relating to student enrollments and faculty headcounts, program development and productivity, faculty salaries and teaching loads, campus development, funding, and the state student load program are included. Student headcount enrollment data are presented by…
ERIC Educational Resources Information Center
Ashworth, Kenneth H.
This supplement to the 1978 Annual Report of the Coordinating Board, Texas College and University System, contains comprehensive statistical data on higher education in Texas. The supplement provides facts, figures, and formulas relating to student enrollments and faculty headcounts, program development and productivity, faculty salaries and…
Replicate This! Creating Individual-Level Data from Summary Statistics Using R
ERIC Educational Resources Information Center
Morse, Brendan J.
2013-01-01
Incorporating realistic data and research examples into quantitative (e.g., statistics and research methods) courses has been widely recommended for enhancing student engagement and comprehension. One way to achieve these ends is to use a data generator to emulate the data in published research articles. "MorseGen" is a free data generator that…
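MorseGen itself generates data in R; as a hedged Python analogue of the same idea, the sketch below draws individual-level scores that reproduce a published mean, standard deviation, and correlation. The summary values are invented.

```python
# Generate individual-level data matching reported summary statistics.
import numpy as np

rng = np.random.default_rng(7)
n, mean, sd = 120, (3.4, 5.1), (0.8, 1.1)      # invented published summaries
r = 0.45                                        # reported correlation
cov = np.array([[sd[0]**2,        r*sd[0]*sd[1]],
                [r*sd[0]*sd[1],   sd[1]**2]])
scores = rng.multivariate_normal(mean, cov, size=n)
print(scores.mean(axis=0), np.corrcoef(scores.T)[0, 1])
```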
African Americans' Participation in a Comprehensive Intervention College Prep Program
ERIC Educational Resources Information Center
Sianjina, Rayton R.; Phillips, Richard
2014-01-01
The National Center for Educational Statistics, in conjunction with the U.S. Department of Education, compiles statistical data for U.S. schools. As charts indicate, in 2001, it reported that nationwide, 76% of high-income graduates immediately enroll in colleges or trade schools. However, only 49% of Hispanic and 59% of African Americans enroll…
Transparency in State Debt Disclosure. Working Papers. No. 17-10
ERIC Educational Resources Information Center
Zhao, Bo; Wang, Wen
2017-01-01
We develop a new measure of relative debt transparency by comparing the amount of state debt reported in the annual Census survey and the amount reported in the statistical section of the state Comprehensive Annual Financial Report (CAFR). GASB 44 requires states to start reporting their total debt in the CAFR statistical section in FY 2006.…
Quinn, Jamie M.; Wagner, Richard K.; Petscher, Yaacov; Lopez, Danielle
2014-01-01
The present study followed a sample of first grade students (N = 316, mean age = 7.05 at first test) through fourth grade to evaluate dynamic developmental relations between vocabulary knowledge and reading comprehension. Using latent change score modeling, competing models were fit to the repeated measurements of vocabulary knowledge and reading comprehension to test for the presence of leading and lagging influences. Univariate models indicated growth in vocabulary knowledge and reading comprehension was determined by two parts: constant yearly change and change proportional to the previous level of the variable. Bivariate models indicated previous levels of vocabulary knowledge acted as leading indicators of reading comprehension growth, but the reverse relation was not found. Implications for theories of developmental relations between vocabulary and reading comprehension are discussed. PMID:25201552
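A minimal generative sketch of the dual change score logic tested above: yearly change in reading is a constant, plus change proportional to the previous level, plus a coupling on the previous level of vocabulary (the leading-indicator effect). Parameter values are invented; the actual analyses estimate such parameters with SEM software.

```python
# Simulate coupled latent-change-score dynamics for vocabulary and reading.
import numpy as np

rng = np.random.default_rng(2)
n_children, n_years = 300, 4
vocab = np.empty((n_children, n_years))
read = np.empty((n_children, n_years))
vocab[:, 0] = rng.normal(50, 10, n_children)
read[:, 0] = rng.normal(30, 8, n_children)

alpha_v, beta_v = 6.0, -0.05                 # constant change + self-feedback
alpha_r, beta_r, gamma = 4.0, -0.04, 0.08    # gamma: vocab -> reading coupling
for t in range(1, n_years):
    d_vocab = alpha_v + beta_v * vocab[:, t-1] + rng.normal(0, 1, n_children)
    d_read = (alpha_r + beta_r * read[:, t-1]
              + gamma * vocab[:, t-1] + rng.normal(0, 1, n_children))
    vocab[:, t] = vocab[:, t-1] + d_vocab
    read[:, t] = read[:, t-1] + d_read

# Under this generating model, higher early vocabulary predicts larger
# subsequent reading gains, the "leading indicator" pattern:
gain = read[:, -1] - read[:, 0]
print(np.corrcoef(vocab[:, 0], gain)[0, 1])
```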
Statistical analysis of fNIRS data: a comprehensive review.
Tak, Sungho; Ye, Jong Chul
2014-01-15
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
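One of the reviewed corrections, serially correlated errors in a GLM, can be sketched on synthetic data as follows. The AR(1) fit uses statsmodels' GLSAR as a convenience, not the pipeline of any particular fNIRS package.

```python
# GLM with AR(1) errors on one synthetic HbO channel.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = np.arange(600)                       # samples
task = ((t % 60) < 30).astype(float)     # 30-on / 30-off block design

# AR(1) physiological noise plus a small task-locked response
noise = np.zeros(len(t))
for i in range(1, len(t)):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(0, 0.5)
hbo = 0.3 * task + noise

X = sm.add_constant(task)
model = sm.GLSAR(hbo, X, rho=1)          # AR(1) serially correlated errors
res = model.iterative_fit(maxiter=5)
print(f"beta = {res.params[1]:.3f}, t = {res.tvalues[1]:.2f}")
```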
Becker, Bronwyn E; Luthar, Suniya S
2002-01-01
Despite concentrated efforts at improving inferior academic outcomes among disadvantaged students, a substantial achievement gap between the test scores of these students and others remains (Jencks & Phillips, 1998; National Center for Education Statistics, 2000a, 2000b; Valencia & Suzuki, 2000). Existing research used ecological models to document social-emotional factors at multiple levels of influence that undermine academic performance. This article integrates ideas from various perspectives in a comprehensive and interdisciplinary model that will inform policy makers, administrators, and schools about the social-emotional factors that act as both risk and protective factors for disadvantaged students' learning and opportunities for academic success. Four critical social-emotional components that influence achievement performance (academic and school attachment, teacher support, peer values, and mental health) are reviewed.
Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon
2018-01-01
Background With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. Objective This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. Methods We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. Results The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. Conclusions In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert level knowledge. PMID:29305341
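The ensemble step can be sketched as simple probability averaging over independently trained models; the candidate answers and probability vectors below are invented.

```python
# Variance-reducing ensemble: average per-candidate answer probabilities.
import numpy as np

candidates = ["p53", "BRCA1", "EGFR"]
# Softmax outputs of three hypothetical independently trained models:
model_probs = np.array([
    [0.55, 0.30, 0.15],
    [0.40, 0.45, 0.15],
    [0.60, 0.25, 0.15],
])
ensemble = model_probs.mean(axis=0)      # simple probability averaging
print("ensemble answer:", candidates[int(np.argmax(ensemble))])
```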
Hayat, Matthew J; Schmiege, Sarah J; Cook, Paul F
2014-04-01
Statistics knowledge is essential for understanding the nursing and health care literature, as well as for applying rigorous science in nursing research. Statistical consultants providing services to faculty and students in an academic nursing program have the opportunity to identify gaps and challenges in statistics education for nursing students. This information may be useful to curriculum committees and statistics educators. This article aims to provide perspective on statistics education stemming from the experiences of three experienced statistics educators who regularly collaborate and consult with nurse investigators. The authors share their knowledge and express their views about data management, data screening and manipulation, statistical software, types of scientific investigation, and advanced statistical topics not covered in the usual coursework. The suggestions provided amount to a call for data with which to study these topics; relevant data about statistics education can assist educators in developing comprehensive statistics coursework for nursing students. Copyright 2014, SLACK Incorporated.
Evaluating comprehensiveness in children's healthcare.
Diniz, Suênia Gonçalves de Medeiros; Damasceno, Simone Soares; Coutinho, Simone Elizabeth Duarte; Toso, Beatriz Rosana Gonçalves de Oliveira; Collet, Neusa
2016-12-15
To evaluate the presence and extent of comprehensiveness in children's healthcare in the context of the Family Health Strategy. Evaluative, quantitative, cross-sectional study conducted with 344 family members of children at the Family Health Units of João Pessoa, PB, Brazil. Data were collected using the PCATool Brazil - child version and analysed according to descriptive and exploratory statistics. The attribute of comprehensiveness did not obtain satisfactory scores in the two evaluated dimensions, namely "available services" and "provided services". The low scores reveal that the attribute comprehensiveness is not employed as expected in a primary care unit and points to the issues that must be altered. It was concluded that the services should be restructured to ensure cross-sector performance in the provision of child care. It is also important to improve the relations between professionals and users to promote comprehensive and effective care.
Cooperativity and modularity in protein folding
Sasai, Masaki; Chikenji, George; Terada, Tomoki P.
2016-01-01
A simple statistical mechanical model proposed by Wako and Saitô has explained the aspects of protein folding surprisingly well. This model was systematically applied to multiple proteins by Muñoz and Eaton and has since been referred to as the Wako-Saitô-Muñoz-Eaton (WSME) model. The success of the WSME model in explaining the folding of many proteins has verified the hypothesis that the folding is dominated by native interactions, which makes the energy landscape globally biased toward native conformation. Using the WSME and other related models, Saitô emphasized the importance of the hierarchical pathway in protein folding; folding starts with the creation of contiguous segments having a native-like configuration and proceeds as growth and coalescence of these segments. The Φ-values calculated for barnase with the WSME model suggested that segments contributing to the folding nucleus are similar to the structural modules defined by the pattern of native atomic contacts. The WSME model was extended to explain folding of multi-domain proteins having a complex topology, which opened the way to comprehensively understanding the folding process of multi-domain proteins. The WSME model was also extended to describe allosteric transitions, indicating that the allosteric structural movement does not occur as a deterministic sequential change between two conformations but as a stochastic diffusive motion over the dynamically changing energy landscape. Statistical mechanical viewpoint on folding, as highlighted by the WSME model, has been renovated in the context of modern methods and ideas, and will continue to provide insights on equilibrium and dynamical features of proteins. PMID:28409080
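A minimal sketch of the WSME Hamiltonian for a toy 10-residue chain with an invented native-contact map: a native contact contributes only when the entire intervening stretch is native, and each ordered residue pays an entropy cost. Exact enumeration is feasible at this size.

```python
# Exact enumeration of a toy Wako-Saito-Muñoz-Eaton (WSME) model.
import itertools
import numpy as np

N = 10
contacts = {(0, 3): -2.0, (2, 6): -1.5, (5, 9): -2.5}  # epsilon_ij, invented
s = 1.0       # entropic cost per ordered residue (units of kT)
beta = 1.0

Z = 0.0
p_native = np.zeros(N)
for m in itertools.product((0, 1), repeat=N):          # m_k = 1: native
    # A contact (i, j) counts only if the whole stretch i..j is native.
    e = sum(eps for (i, j), eps in contacts.items() if all(m[i:j + 1]))
    w = np.exp(-beta * e - s * sum(m))
    Z += w
    p_native += w * np.array(m)

print("per-residue native probability:", np.round(p_native / Z, 2))
```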
Comparison between Two Methods for agricultural drought disaster risk in southwestern China
NASA Astrophysics Data System (ADS)
han, lanying; zhang, qiang
2016-04-01
Drought is a natural disaster that causes huge losses to agricultural yields worldwide. Drought risk has become increasingly prominent with the climatic warming of the past century, and it is one of the main meteorological disasters and a serious problem in southwestern China, where drought risk exceeds the national average. Climate change is likely to exacerbate the problem, thereby endangering China's food security. In this paper, drought disasters in southwestern China (a region with serious drought risk, whose comprehensive losses account for 3.9% of the national drought-affected area) were selected to show how drought changes under climate change, and two methods were used to assess drought disaster risk: a drought risk assessment model and a comprehensive drought risk index. First, we used the analytic hierarchy process and meteorological, geographic, soil, and remote-sensing data to develop a drought risk assessment model (defined using a comprehensive drought disaster risk index, R) based on the drought hazard, environmental vulnerability, sensitivity and exposure of the values at risk, and the capacity to prevent or mitigate the problem. Second, we built a comprehensive drought risk index (defined using a comprehensive drought disaster loss, L) based on statistical drought disaster data, including crop yields, drought-induced areas, drought-occurrence areas, areas with no harvest caused by drought, and planting areas. Using the model, we assessed the drought risk. The results showed that the spatial distributions of the two drought disaster risks were coherent and revealed complex zonality in southwestern China. The results also showed that drought risk is becoming more serious and frequent against the background of global climatic warming. The eastern part of the study area had extremely high risk; risk was generally greater in the north than in the south and increased from southwest to northeast. Drought disaster risk and loss were highest in Sichuan Province and Chongqing Municipality, and lowest in Yunnan Province. The comprehensive drought disaster loss trended upward over the past 60 years, and the trend of drought occurrence over the same period was upward in every province of the southwestern region. The drought risk of each province is related to regional climate change, including temperature, precipitation, soil moisture, and vegetation coverage. The contribution of the risk factors to R was highest for the capacity for prevention and mitigation, followed by the drought hazard, sensitivity and exposure, and environmental vulnerability.
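The weighted-composite logic of the risk index R can be sketched as below. The four factor values and AHP-style weights are invented, and treating mitigation capacity as risk-reducing by inversion is one common convention, not necessarily the authors' exact formulation.

```python
# Composite drought risk index from normalized factors and AHP-style weights.
import numpy as np

# hazard, vulnerability, sensitivity/exposure, prevention-mitigation capacity
factors = np.array([0.72, 0.55, 0.61, 0.40])   # each normalized to [0, 1]
weights = np.array([0.30, 0.20, 0.20, 0.30])   # invented AHP weights, sum to 1

# Capacity reduces risk, so it enters inverted (one common convention):
contrib = factors.copy()
contrib[3] = 1.0 - contrib[3]
R = float(np.dot(weights, contrib))
print(f"comprehensive drought disaster risk R = {R:.2f}")
```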
Statistical physics of human cooperation
NASA Astrophysics Data System (ADS)
Perc, Matjaž; Jordan, Jillian J.; Rand, David G.; Wang, Zhen; Boccaletti, Stefano; Szolnoki, Attila
2017-05-01
Extensive cooperation among unrelated individuals is unique to humans, who often sacrifice personal benefits for the common good and work together to achieve what they are unable to execute alone. The evolutionary success of our species is indeed due, to a large degree, to our unparalleled other-regarding abilities. Yet, a comprehensive understanding of human cooperation remains a formidable challenge. Recent research in the social sciences indicates that it is important to focus on the collective behavior that emerges as the result of the interactions among individuals, groups, and even societies. Non-equilibrium statistical physics, in particular Monte Carlo methods and the theory of collective behavior of interacting particles near phase transition points, has proven to be very valuable for understanding counterintuitive evolutionary outcomes. By treating models of human cooperation as classical spin models, a physicist can draw on familiar settings from statistical physics. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among humans often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. The complexity of solutions therefore often surpasses that observed in physical systems. Here we review experimental and theoretical research that advances our understanding of human cooperation, focusing on spatial pattern formation, on the spatiotemporal dynamics of observed solutions, and on self-organization that may either promote or hinder socially favorable states.
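A standard minimal instance of the Monte Carlo approach reviewed here is a spatial prisoner's dilemma with the Fermi imitation rule, prob = 1 / (1 + exp(-(P_neighbor - P_focal) / K)). The lattice size, temptation b, and noise K below are conventional illustrative choices, not values from any particular study.

```python
# Spatial weak prisoner's dilemma with Fermi-rule strategy imitation.
import numpy as np

rng = np.random.default_rng(4)
L, b, K = 50, 1.05, 0.1                  # lattice size, temptation, noise
strat = rng.integers(0, 2, (L, L))       # 1 = cooperate, 0 = defect

def payoff(s, i, j):
    """Accumulated payoff against the four von Neumann neighbours."""
    total = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = s[(i + di) % L, (j + dj) % L]
        if s[i, j] == 1:
            total += 1.0 if nb == 1 else 0.0    # R = 1, S = 0
        else:
            total += b if nb == 1 else 0.0      # T = b, P = 0
    return total

for _ in range(20 * L * L):              # ~20 full Monte Carlo steps
    i, j = rng.integers(0, L, 2)
    di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(0, 4)]
    ni, nj = (i + di) % L, (j + dj) % L
    gain = payoff(strat, ni, nj) - payoff(strat, i, j)
    if rng.random() < 1 / (1 + np.exp(-gain / K)):   # Fermi rule
        strat[i, j] = strat[ni, nj]

print("fraction of cooperators:", strat.mean())
```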
NASA Astrophysics Data System (ADS)
Martinez, Patricia
This thesis describes a research study that resulted in an instructional model directed at helping diverse fourth-grade students improve their science knowledge, their reading comprehension, their awareness of the relationship between science and reading, and their ability to transfer strategies. The focus of the instructional model emerged from the intersection of constructs in science and reading literacy; the model identifies cognitive strategies that can be used in science and reading, and inquiry-based instruction related to the science content read by participants. The intervention is termed INSCIREAD (Instruction in Science and Reading). The GoInquire web-based system (2006) was used to develop students' content knowledge in slow landform change. Seventy-eight students participated in the study. The treatment group comprised 49 students without disabilities and 8 students with disabilities. The control group comprised 21 students without disabilities. The design of the study is a combination of a mixed-methods quasi-experimental design (Study 1) and a single-subject design with groups as the unit of analysis (Study 2). The results from the quantitative measures showed that the text recall analysis from Study 1 approached statistical significance when comparing the performance of students without disabilities in the treatment group to that of the control group. Visual analyses of the text recall data from Study 2 showed at least minimal change in all groups. Analysis of the level of the generated questions showed a statistically significant increase from pretest to posttest in the scores students without disabilities obtained for the questions they generated. The analyses conducted to detect incongruities, to summarize and rate importance, and to determine the number of propositions on a science and reading concept map showed a statistically significant difference between students without disabilities in the treatment and control groups on post-intervention scores. The analysis of the number of misconceptions held by students without disabilities showed that the frequency of 4 of the 11 misconceptions changed significantly from the pre- to the post-elicitation stage. The analyses of the qualitative measures, the think-alouds and interviews, generally supported the above findings.
NASA Astrophysics Data System (ADS)
Dieppois, B.; Pohl, B.; Eden, J.; Crétat, J.; Rouault, M.; Keenlyside, N.; New, M. G.
2017-12-01
The water management community has hitherto neglected or underestimated many of the uncertainties in climate impact scenarios, in particular uncertainties associated with decadal climate variability. Uncertainty in state-of-the-art global climate models (GCMs) is timescale-dependent, e.g. stronger at decadal than at interannual timescales, in response to different parameterizations and to internal climate variability. In addition, non-stationarity in statistical downscaling is widely recognized as a key problem, in which the timescale dependency of predictors plays an important role. As with global climate modelling, therefore, the selection of downscaling methods must proceed with caution to avoid unintended consequences of over-correcting the noise in GCMs (e.g. interpreting internal climate variability as a model bias). GCM outputs from the Coupled Model Intercomparison Project 5 (CMIP5) have therefore first been selected based on their ability to reproduce southern African summer rainfall variability and its teleconnections with Pacific sea-surface temperature across the dominant timescales. In observations, southern African summer rainfall has recently been shown to exhibit significant periodicities at interannual (2-8 years), quasi-decadal (8-13 years) and inter-decadal (15-28 years) timescales, which can be interpreted as the signature of ENSO, the IPO, and the PDO over the region. Most CMIP5 GCMs underestimate southern African summer rainfall variability and its teleconnections with Pacific SSTs at these three timescales. In addition, according to a more in-depth analysis of historical and pi-control runs, this bias might result from internal climate variability in some of the CMIP5 GCMs, suggesting potential for bias-corrected prediction based on empirical statistical downscaling. A multi-timescale regression-based downscaling procedure, which determines the predictors across the different timescales, has thus been used to simulate southern African summer rainfall. This multi-timescale procedure shows much better skill in simulating decadal timescales of variability compared to commonly used statistical downscaling approaches.
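Under stated assumptions, the multi-timescale idea can be sketched as band-pass filtering a predictor into the three observed timescales and regressing rainfall on the components. All series are synthetic annual data, and the real procedure is considerably richer.

```python
# Multi-timescale regression: band-pass predictor components as regressors.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(5)
years = 120
sst = rng.standard_normal(years).cumsum() * 0.1 + rng.standard_normal(years)

def bandpass(x, lo_period, hi_period, fs=1.0):
    b, a = butter(3, [1 / hi_period, 1 / lo_period], btype="band", fs=fs)
    return filtfilt(b, a, x)

# 2.5 yr used for the interannual band edge to stay below the Nyquist
# limit of annual data (0.5 cycles per year).
bands = [bandpass(sst, 2.5, 8), bandpass(sst, 8, 13), bandpass(sst, 15, 28)]
rain = 2.0*bands[0] - 1.0*bands[1] + 0.5*bands[2] + 0.3*rng.standard_normal(years)

X = np.column_stack([np.ones(years)] + bands)
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
print("recovered per-timescale coefficients:", np.round(coef[1:], 2))
```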
Making texts in electronic health records comprehensible to consumers: a prototype translator.
Zeng-Treitler, Qing; Goryachev, Sergey; Kim, Hyeoneui; Keselman, Alla; Rosendale, Douglas
2007-10-11
Narrative reports from electronic health records are a major source of content for personal health records. We designed and implemented a prototype text translator to make these reports more comprehensible to consumers. The translator identifies difficult terms, replaces them with easier synonyms, and generates and inserts explanatory texts for them. In feasibility testing, the application was used to translate 9 clinical reports. The majority (68.8%) of text replacements and insertions were deemed correct and helpful by expert review. User evaluation demonstrated a non-statistically significant trend toward better comprehension when translation is provided (p=0.15).
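The replace-and-explain core of such a translator can be sketched with a tiny invented glossary; the production system draws on terminology resources rather than a hand-written dictionary.

```python
# Replace difficult terms with consumer synonyms plus short explanations.
import re

GLOSSARY = {
    "myocardial infarction": ("heart attack",
                              "blockage of blood flow to the heart"),
    "hypertension": ("high blood pressure",
                     "blood pressure above the normal range"),
}

def translate(report: str) -> str:
    for term, (synonym, explanation) in GLOSSARY.items():
        pattern = re.compile(re.escape(term), flags=re.IGNORECASE)
        report = pattern.sub(f"{synonym} ({explanation})", report)
    return report

print(translate("History of myocardial infarction; hypertension controlled."))
```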
A pairwise maximum entropy model accurately describes resting-state human brain networks
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
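A hedged sketch of fitting a pairwise maximum entropy (Ising-like) model by moment matching, with exact enumeration over a toy five-region system. The binarized activity is simulated rather than taken from fMRI.

```python
# Fit fields h and couplings J so model means/correlations match the data.
import itertools
import numpy as np

rng = np.random.default_rng(6)
n = 5
data = (rng.random((2000, n)) < 0.3).astype(float)   # fake binarized signals
states = np.array(list(itertools.product((0.0, 1.0), repeat=n)))

h = np.zeros(n)
J = np.zeros((n, n))
m_data = data.mean(0)
C_data = data.T @ data / len(data)

for _ in range(500):                                  # gradient ascent
    E = states @ h + np.einsum("si,ij,sj->s", states, J, states) / 2
    p = np.exp(E - E.max()); p /= p.sum()             # Boltzmann weights
    m_model = p @ states
    C_model = states.T @ (states * p[:, None])
    h += 0.5 * (m_data - m_model)                     # match means
    J += 0.5 * (C_data - C_model)                     # match correlations
    np.fill_diagonal(J, 0.0)

print("max moment mismatch:", np.abs(C_data - C_model).max())
```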
A Framework for Considering Comprehensibility in Modeling
Gleicher, Michael
2016-01-01
Comprehensibility in modeling is the ability of stakeholders to understand relevant aspects of the modeling process. In this article, we provide a framework to help guide exploration of the space of comprehensibility challenges. We consider facets organized around key questions: Who is comprehending? Why are they trying to comprehend? Where in the process are they trying to comprehend? How can we help them comprehend? How do we measure their comprehension? With each facet we consider the broad range of options. We discuss why taking a broad view of comprehensibility in modeling is useful in identifying challenges and opportunities for solutions. PMID:27441712
WAIS-IV subtest covariance structure: conceptual and statistical considerations.
Ward, L Charles; Bergman, Maria A; Hebert, Katina R
2012-06-01
D. Wechsler (2008b) reported confirmatory factor analyses (CFAs) with standardization data (ages 16-69 years) for 10 core and 5 supplemental subtests from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Analyses of the 15 subtests supported 4 hypothesized oblique factors (Verbal Comprehension, Working Memory, Perceptual Reasoning, and Processing Speed) but also revealed unexplained covariance between Block Design and Visual Puzzles (Perceptual Reasoning subtests). That covariance was not included in the final models. Instead, a path was added from Working Memory to Figure Weights (Perceptual Reasoning subtest) to improve fit and achieve a desired factor pattern. The present research with the same data (N = 1,800) showed that the path from Working Memory to Figure Weights increases the association between Working Memory and Matrix Reasoning. Specifying both paths improves model fit and largely eliminates unexplained covariance between Block Design and Visual Puzzles but with the undesirable consequence that Figure Weights and Matrix Reasoning are equally determined by Perceptual Reasoning and Working Memory. An alternative 4-factor model was proposed that explained theory-implied covariance between Block Design and Visual Puzzles and between Arithmetic and Figure Weights while maintaining compatibility with WAIS-IV Index structure. The proposed model compared favorably with a 5-factor model based on Cattell-Horn-Carroll theory. The present findings emphasize that covariance model comparisons should involve considerations of conceptual coherence and theoretical adherence in addition to statistical fit. (c) 2012 APA, all rights reserved
ERIC Educational Resources Information Center
Tobia, Valentina; Ciancaleoni, Matteo; Bonifacci, Paola
2017-01-01
In this study, two alternative theoretical models were compared, in order to analyze which of them best explains primary school children's text comprehension skills. The first one was based on the distinction between two types of answers requested by the comprehension test: local or global. The second model involved texts' input modality: written…
NASA Astrophysics Data System (ADS)
Zhao, Qiang; Gao, Qian; Zhu, Mingyue; Li, Xiumei
2018-06-01
Water resources carrying capacity is the maximum available water resources capable of supporting social and economic development. Based on an investigation and statistical analysis of the current situation of water resources in Shandong Province, this paper selects 13 evaluation factors: per capita water resources, water resources utilization, water supply modulus, rainfall, per capita GDP, population density, per capita water consumption, water consumption per million yuan of GDP, water consumption per million yuan of industrial output value, agricultural output value of farmland, irrigation rate of cultivated land, ecological-environment water consumption rate, and forest coverage rate. A fuzzy comprehensive evaluation model was then used to assess the water resources carrying capacity. The results show that the comprehensive evaluation values for water resources in Shandong Province were lower than 0.6 in 2001-2009 and higher than 0.6 in 2010-2015, indicating that the water resources carrying capacity of Shandong Province has improved. In addition, most years had values below 0.6, with individual years below 0.4, and interannual changes were relatively large, showing that the water resources carrying capacity of Shandong Province is generally weak and varies considerably from year to year.
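A fuzzy comprehensive evaluation combines a membership matrix with an indicator weight vector. The sketch below is a generic toy version; the triangular membership functions, weights, and grade values are illustrative, not those of the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

weights = np.array([0.5, 0.3, 0.2])            # indicator weights (sum to 1)
indicators = np.array([0.45, 0.70, 0.55])      # normalized indicator values

# Membership matrix R: rows = indicators, columns = grades (weak/medium/strong)
R = np.array([[tri(v, -0.2, 0.2, 0.6),
               tri(v, 0.2, 0.6, 1.0),
               tri(v, 0.6, 1.0, 1.4)]
              for v in indicators])

B = weights @ R                                # fuzzy composition (weighted-average operator)
score = B @ np.array([0.2, 0.6, 1.0])          # defuzzify to one carrying-capacity score
print(B, score)
```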
NASA Astrophysics Data System (ADS)
Kotlarski, Sven; Gutiérrez, José M.; Boberg, Fredrik; Bosshard, Thomas; Cardoso, Rita M.; Herrera, Sixto; Maraun, Douglas; Mezghani, Abdelkader; Pagé, Christian; Räty, Olle; Stepanek, Petr; Soares, Pedro M. M.; Szabo, Peter
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of downscaling methods. Such assessments can be expected to crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling, observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. We here present a comprehensive assessment of the influence of uncertainties in observational reference data and of scale-related issues on several of the above-mentioned aspects. First, temperature and precipitation characteristics as simulated by a set of reanalysis-driven EURO-CORDEX RCM experiments are validated against three different gridded reference data products, namely (1) the EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. The analysis reveals a considerable influence of the choice of the reference data on the evaluation results, especially for precipitation. It is also illustrated how differences between the reference data sets influence the ranking of RCMs according to a comprehensive set of performance measures.
Loley, Christina; Alver, Maris; Assimes, Themistocles L.; Bjonnes, Andrew; Goel, Anuj; Gustafsson, Stefan; Hernesniemi, Jussi; Hopewell, Jemma C.; Kanoni, Stavroula; Kleber, Marcus E.; Lau, King Wai; Lu, Yingchang; Lyytikäinen, Leo-Pekka; Nelson, Christopher P.; Nikpay, Majid; Qu, Liming; Salfati, Elias; Scholz, Markus; Tukiainen, Taru; Willenborg, Christina; Won, Hong-Hee; Zeng, Lingyao; Zhang, Weihua; Anand, Sonia S.; Beutner, Frank; Bottinger, Erwin P.; Clarke, Robert; Dedoussis, George; Do, Ron; Esko, Tõnu; Eskola, Markku; Farrall, Martin; Gauguier, Dominique; Giedraitis, Vilmantas; Granger, Christopher B.; Hall, Alistair S.; Hamsten, Anders; Hazen, Stanley L.; Huang, Jie; Kähönen, Mika; Kyriakou, Theodosios; Laaksonen, Reijo; Lind, Lars; Lindgren, Cecilia; Magnusson, Patrik K. E.; Marouli, Eirini; Mihailov, Evelin; Morris, Andrew P.; Nikus, Kjell; Pedersen, Nancy; Rallidis, Loukianos; Salomaa, Veikko; Shah, Svati H.; Stewart, Alexandre F. R.; Thompson, John R.; Zalloua, Pierre A.; Chambers, John C.; Collins, Rory; Ingelsson, Erik; Iribarren, Carlos; Karhunen, Pekka J.; Kooner, Jaspal S.; Lehtimäki, Terho; Loos, Ruth J. F.; März, Winfried; McPherson, Ruth; Metspalu, Andres; Reilly, Muredach P.; Ripatti, Samuli; Sanghera, Dharambir K.; Thiery, Joachim; Watkins, Hugh; Deloukas, Panos; Kathiresan, Sekar; Samani, Nilesh J.; Schunkert, Heribert; Erdmann, Jeanette; König, Inke R.
2016-01-01
In recent years, genome-wide association studies have identified 58 independent risk loci for coronary artery disease (CAD) on the autosome. However, due to the sex-specific data structure of the X chromosome, it has been excluded from most of these analyses. While females have 2 copies of chromosome X, males have only one. Also, one of the female X chromosomes may be inactivated. Therefore, special test statistics and quality control procedures are required. Thus, little is known about the role of X-chromosomal variants in CAD. To fill this gap, we conducted a comprehensive X-chromosome-wide meta-analysis including more than 43,000 CAD cases and 58,000 controls from 35 international study cohorts. For quality control, sex-specific filters were used to adequately take the special structure of X-chromosomal data into account. For single study analyses, several logistic regression models were calculated allowing for inactivation of one female X-chromosome, adjusting for sex and investigating interactions between sex and genetic variants. Then, meta-analyses including all 35 studies were conducted using random effects models. None of the investigated models revealed genome-wide significant associations for any variant. Although we analyzed the largest-to-date sample, currently available methods were not able to detect any associations of X-chromosomal variants with CAD. PMID:27731410
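The random-effects meta-analysis step can be illustrated with the standard DerSimonian-Laird estimator; the per-cohort effect sizes below are invented for the example.

```python
import numpy as np

def dersimonian_laird(beta, se):
    """Random-effects meta-analysis of per-study effect estimates `beta`
    with standard errors `se` (DerSimonian-Laird tau^2 estimator)."""
    w = 1.0 / se**2
    beta_fixed = np.sum(w * beta) / np.sum(w)
    Q = np.sum(w * (beta - beta_fixed) ** 2)      # Cochran's Q
    df = len(beta) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_star = 1.0 / (se**2 + tau2)
    beta_re = np.sum(w_star * beta) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    return beta_re, se_re, tau2

# Toy per-cohort log-odds ratios for one X-chromosomal variant
beta = np.array([0.05, 0.12, -0.02, 0.08])
se = np.array([0.04, 0.06, 0.05, 0.03])
print(dersimonian_laird(beta, se))
```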
On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution
NASA Astrophysics Data System (ADS)
Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein
2018-07-01
One cause of cancer mortality is tumor evolution to therapy-resistant disease. First line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting to determine appropriate therapy and prepare for potential recurrence pretreatment.
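As a rough illustration of scoring a candidate mutation against a negative binomial noise background (the paper aggregates several such distributions; this sketch fits a single one by the method of moments):

```python
import numpy as np
from scipy import stats

def nb_fit_moments(counts):
    """Method-of-moments fit of a negative binomial to background error counts."""
    m, v = counts.mean(), counts.var(ddof=1)
    p = m / v                         # requires overdispersion (v > m)
    n = m * m / (v - m)
    return n, p

rng = np.random.default_rng(2)
background = rng.negative_binomial(5, 0.3, size=1000)  # toy control-site error counts
n, p = nb_fit_moments(background)

observed = 60                                  # read count of a candidate variant
pval = stats.nbinom.sf(observed - 1, n, p)     # P(X >= observed) under noise alone
print(pval)
```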
NASA Technical Reports Server (NTRS)
Phillips, T. J.
1984-01-01
The heating associated with equatorial, subtropical, and midlatitude ocean temperature anomalies in the Held-Suarez climate model is analyzed. The local and downstream response to the anomalies is analyzed, first by examining the seasonal variation in heating associated with each ocean temperature anomaly, and then by combining knowledge of the heating with linear dynamical theory in order to develop a more comprehensive explanation of the seasonal variation in local and downstream atmospheric response to each anomaly. The extent to which the linear theory of propagating waves can assist the interpretation of the remote cross-latitudinal response of the model to the ocean temperature anomalies is considered. Alternative hypotheses that attempt to avoid the contradictions inherent in a strict application of linear theory are investigated, and the impact of sampling errors on the assessment of statistical significance is also examined.
Robinson, John W
2012-03-01
Propensity score models are increasingly used in observational comparative effectiveness studies to reduce confounding by covariates that are associated with both a study outcome and treatment choice. Any such potentially confounding covariate will bias estimation of the effect of treatment on the outcome, unless the distribution of that covariate is well-balanced between treatment and control groups. Constructing a subsample of treated and control subjects who are matched on estimated propensity scores is a means of achieving such balance for covariates that are included in the propensity score model. If, during study design, investigators assemble a comprehensive inventory of known and suspected potentially confounding covariates, examination of how well this inventory is covered by the chosen dataset yields an assessment of the extent of bias reduction that is possible by matching on estimated propensity scores. These considerations are explored by examining the designs of three recently published comparative effectiveness studies.
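A minimal sketch of propensity score estimation followed by 1:1 nearest-neighbour matching and a balance check, on synthetic data (the covariates and treatment model are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                         # toy covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment depends on X

# Step 1: estimate propensity scores from covariates.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching of treated to controls on the score.
treated, controls = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# Step 3: check covariate balance (standardized mean differences) after matching.
smd = (X[treated].mean(0) - X[matched_controls].mean(0)) / X.std(0)
print(np.round(smd, 3))
```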
A weighted U-statistic for genetic association analyses of sequencing data.
Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing
2014-12-01
With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very low density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.
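The flavor of a weighted U-statistic, i.e. a sum over subject pairs of a phenotype kernel weighted by a genotype kernel, can be sketched as below; the particular kernels and weights are illustrative and are not the WU-SEQ construction.

```python
import numpy as np

def wu_statistic(G, y, sigma=1.0):
    """Toy weighted U-statistic: pairwise phenotype similarity weighted by a
    genotype kernel over rare-variant counts (n subjects x m variants)."""
    n = len(y)
    # Genotype weight: Gaussian kernel on genotype differences
    d2 = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    # Phenotype kernel: centred cross-products
    yc = y - y.mean()
    K = np.outer(yc, yc)
    iu = np.triu_indices(n, k=1)       # sum over distinct pairs i < j
    return (W[iu] * K[iu]).sum() / len(iu[0])

rng = np.random.default_rng(4)
G = rng.binomial(2, 0.02, size=(200, 30))   # toy rare-variant genotypes
y = rng.normal(size=200)
print(wu_statistic(G, y))
```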
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses
Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.
2015-01-01
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
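A crude stand-in for the two-level weighting idea, estimating shared per-sample variance factors and a voom-like mean-variance trend from log-CPM values (this is a toy approximation, not the limma implementation):

```python
import numpy as np

def sample_and_obs_weights(logcpm, groups):
    """Toy two-level weighting: per-sample variance factors estimated from
    residual variability, combined with a mean-variance observational trend."""
    # Residuals from group means, gene by gene
    resid = logcpm.copy()
    for g in np.unique(groups):
        cols = groups == g
        resid[:, cols] -= logcpm[:, cols].mean(axis=1, keepdims=True)
    # Sample-level variance factors shared across genes (log-linear model idea)
    sample_var = (resid**2).mean(axis=0)
    sample_w = sample_var.mean() / sample_var        # down-weight noisy samples
    # Observation-level weights from a crude mean-variance trend (voom-like)
    gene_mean = logcpm.mean(axis=1)
    gene_var = resid.var(axis=1, ddof=1)
    order = np.argsort(gene_mean)
    trend = np.interp(gene_mean, gene_mean[order],
                      np.convolve(gene_var[order], np.ones(50) / 50, mode="same"))
    obs_w = 1.0 / np.maximum(trend, 1e-8)[:, None]
    return obs_w * sample_w[None, :]                 # combined weights

rng = np.random.default_rng(5)
logcpm = rng.normal(5, 1, size=(1000, 8))            # toy log-CPM matrix
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
W = sample_and_obs_weights(logcpm, groups)
```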
NASA Astrophysics Data System (ADS)
Ahmad, M. F.; Rasi, R. Z.; Zakuan, N.; Hisyamudin, M. N. N.
2015-12-01
In today's highly competitive market, Total Quality Management (TQM) is a vital management tool for ensuring that a company can succeed in its business. In order to survive in the global market with intense competition amongst regions and enterprises, the adoption of tools and techniques is essential for improving business performance. Results consistently link TQM to business performance. However, only a few previous studies have examined the mediating effect of statistical process control (SPC) between TQM and business performance. A mediator is a third variable that changes the association between an independent variable and an outcome variable. This study proposes a TQM performance model with SPC as a mediator, estimated with structural equation modelling, which constitutes a more comprehensive model for developing countries, specifically Malaysia. A questionnaire was prepared and sent to 1500 companies from the automotive industry and related vendors in Malaysia, giving a 21.8 per cent response rate. Tests for a significant mediating effect between TQM practices and business performance showed that SPC is an important tool and technique in TQM implementation. The results indicate that SPC partially mediates the relationship between TQM and business performance (BP), with an indirect effect (IE) of 0.25, which can be categorised as a high mediating effect.
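The mediation claim rests on an indirect effect a x b (TQM to SPC, SPC to business performance). A percentile-bootstrap sketch of such an indirect effect, on synthetic data standing in for the survey constructs:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def indirect_effect(x, m, y):
    """a*b: path from x to mediator m, times path from m to y given x."""
    a = LinearRegression().fit(x.reshape(-1, 1), m).coef_[0]
    b = LinearRegression().fit(np.column_stack([m, x]), y).coef_[0]
    return a * b

rng = np.random.default_rng(6)
n = 300
tqm = rng.normal(size=n)
spc = 0.5 * tqm + rng.normal(scale=0.8, size=n)            # mediator
bp = 0.5 * spc + 0.3 * tqm + rng.normal(scale=0.8, size=n)

# Percentile bootstrap confidence interval for the indirect effect
boot = [indirect_effect(*(arr[idx] for arr in (tqm, spc, bp)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
print(np.percentile(boot, [2.5, 97.5]))
```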
3Drefine: an interactive web server for efficient protein structure refinement
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-01-01
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization of the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371
Operational Consequences of Literacy Gap.
1980-05-01
Comprehension Scores on the Safety and Sanitation Content; Statistics on Experimental Groups' Performance by Sex and Content; Analysis of Variance of Experimental Groups by Sex and Content; Mean Comprehension Scores Broken Down by Content, Subject RGL and Reading Time. …ratings along a scale of difficulty which parallels the school grade scale. Burkett (1975) and Klare (1963; 1974-1975) provide summaries of the extensive…
Analyzing and Integrating Models of Multiple Text Comprehension
ERIC Educational Resources Information Center
List, Alexandra; Alexander, Patricia A.
2017-01-01
We introduce a special issue featuring four theoretical models of multiple text comprehension. We present a central framework for conceptualizing the four models in this special issue. Specifically, we chart the models according to how they consider learner, texts, task, and context factors in explaining multiple text comprehension. In addition,…
Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses
NASA Astrophysics Data System (ADS)
Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong
2017-04-01
Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive, and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Due to the need of fitting only a single regression model for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The performed SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
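The SAMOS step, standardizing both fields by site-specific climatologies and then fitting one pooled regression, can be sketched as follows (shapes, variable names, and the toy data are assumptions, not the operational ZAMG setup):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_sites, n_days = 50, 400
obs = rng.gamma(2.0, 1.5, size=(n_sites, n_days))          # toy observed precipitation
fc = obs + rng.normal(scale=1.0, size=(n_sites, n_days))   # toy ensemble-mean forecast

def standardize(x):
    """Transform to standardized anomalies via site-specific climatologies."""
    mu = x.mean(axis=1, keepdims=True)
    sd = x.std(axis=1, keepdims=True)
    return (x - mu) / sd, mu, sd

obs_anom, obs_mu, obs_sd = standardize(obs)
fc_anom, _, _ = standardize(fc)

# Single pooled regression valid for every site/grid point at once
reg = LinearRegression().fit(fc_anom.reshape(-1, 1), obs_anom.ravel())

# Back-transform calibrated anomalies to the observation scale at any site
cal = reg.predict(fc_anom.reshape(-1, 1)).reshape(n_sites, n_days)
cal = cal * obs_sd + obs_mu
```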
Si, Yan; Guo, Yan; Yuan, Chao; Xu, Tao; Zheng, Shu Guo
2016-03-01
To explore the effectiveness of comprehensive oral health care in reducing caries incidence among children with severe early childhood caries (s-ECC) in an urban area in China. A total of 357 children aged 3 to 4 years old and diagnosed with s-ECC were recruited in this randomised controlled, single-blinded clinical trial for 1 year. Children from two different kindergarten classes were enrolled in this study and randomly divided into a test group (205 children) and a control group (152 children). The test group received comprehensive oral health care, which included oral health examination, oral health education, topical fluoride application and dental treatment; the children in the control group only received the oral health examination. An oral health questionnaire for parents was also evaluated. An evaluation was carried out at the time of recruitment and 1 year later to explore the effectiveness of the comprehensive oral health care model. The differences in decayed teeth (dt), decayed tooth surfaces (ds), filled teeth (ft), filled tooth surfaces (fs) and the ratio of ft/(dt + ft) between the two groups were statistically significant (P < 0.001) at 1 year. The incidence of caries in the control group was higher than that of the test group (P = 0.02). The rate of awareness of oral health knowledge (P = 0.01) and the practice of good diet habits (P = 0.02) by parents in the test group were significantly higher than those in the control group. The present study demonstrated that the comprehensive oral health care program reduces and prevents caries amongst children with s-ECC.
World War II War Production-Why Were the B-17 and B-24 Produced in Parallel?
1997-03-01
Winton, A Black Hole in the Wild Blue Yonder: The Need for a Comprehensive Theory of Airpower (Air Command and Staff College War Theory Coursebook). …statistical comparisons made, of which most are summarized as follows: 1. Statistical data compiled on the utilization of both planes showed that the B-17 was easier to maintain and therefore more available for combat. 2. Statistical data on time from aircraft acceptance to delivery in theater showed that…
Predictors of nutrition information comprehension in adulthood.
Miller, Lisa M Soederberg; Gibson, Tanja N; Applegate, Elizabeth A
2010-07-01
The goal of the present study was to examine relationships among several predictors of nutrition comprehension. We were particularly interested in exploring whether nutrition knowledge or motivation moderated the effects of attention on comprehension across a wide age range of adults. Ninety-three participants, ages 18-80, completed measures of nutrition knowledge and motivation and then read nutrition information (from which attention allocation was derived) and answered comprehension questions. In general, predictor variables were highly intercorrelated. However, knowledge, but not motivation, had direct effects on comprehension accuracy. In contrast, motivation influenced attention, which in turn influenced accuracy. Results also showed that comprehension accuracy decreased, and knowledge increased, with age. When knowledge was statistically controlled, age declines in comprehension increased. Knowledge is an important predictor of nutrition information comprehension and its role increases in later life. Motivation is also important; however, its effects on comprehension differ from knowledge. Health educators and clinicians should consider cognitive skills such as knowledge as well as motivation and age of patients when deciding how to best convey health information. The increased role of knowledge among older adults suggests that lifelong educational efforts may have important payoffs in later life. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Jarman, Lisa; Martin, Angela; Venn, Alison; Otahal, Petr; Sanderson, Kristy
2015-12-24
Workplace health promotion (WHP) has been proposed as a preventive intervention for job stress, possibly operating by promoting positive organizational culture or via programs promoting healthy lifestyles. The aim of this study was to investigate whether job stress changed over time in association with the availability of, and/or participation in a comprehensive WHP program (Healthy@Work). This observational study was conducted in a diverse public sector organization (~28,000 employees). Using a repeated cross-sectional design with models corroborated using a cohort of repeat responders, self-report survey data were collected via a 40 % employee population random sample in 2010 (N = 3406) and 2013 (N = 3228). Outcomes assessed were effort and reward (self-esteem) components of the effort-reward imbalance (ERI) measure of job stress. Exposures were availability of, and participation in, comprehensive WHP. Linear mixed models and Poisson regression were used, with analyses stratified by sex and weighted for non-response. Higher WHP availability was positively associated with higher perceived self-esteem among women. Women's mean reward scores increased over time but were not statistically different (p > 0.05) after 3 years. For men, higher WHP participation was associated with lower perceived effort. Men's mean ERI increased over time. Results were supported in the cohort group. For women, comprehensive WHP availability contributed to a sense of organizational support, potentially impacting the esteem component of reward. Men with higher WHP participation also benefitted but gains were modest over time and may have been hindered by other work environment factors.
Dosimetric treatment course simulation based on a statistical model of deformable organ motion
NASA Astrophysics Data System (ADS)
Söhn, M.; Sobotta, B.; Alber, M.
2012-06-01
We present a method of modeling dosimetric consequences of organ deformation and correlated motion of adjacent organ structures in radiotherapy. Based on a few organ geometry samples and the respective deformation fields as determined by deformable registration, principal component analysis (PCA) is used to create a low-dimensional parametric statistical organ deformation model (Söhn et al 2005 Phys. Med. Biol. 50 5893-908). PCA determines the most important geometric variability in terms of eigenmodes, which represent 3D vector fields of correlated organ deformations around the mean geometry. Weighted sums of a few dominating eigenmodes can be used to simulate synthetic geometries, which are statistically meaningful inter- and extrapolations of the input geometries, and predict their probability of occurrence. We present the use of PCA as a versatile treatment simulation tool, which allows comprehensive dosimetric assessment of the detrimental effects that deformable geometric uncertainties can have on a planned dose distribution. For this, a set of random synthetic geometries is generated by a PCA model for each simulated treatment course, and the dose of a given treatment plan is accumulated in the moving tissue elements via dose warping. This enables the calculation of average voxel doses, local dose variability, dose-volume histogram uncertainties, marginal as well as joint probability distributions of organ equivalent uniform doses and thus of TCP and NTCP, and other dosimetric and biologic endpoints. The method is applied to the example of deformable motion of prostate/bladder/rectum in prostate IMRT. Applications include dosimetric assessment of the adequacy of margin recipes, adaptation schemes, etc, as well as prospective ‘virtual’ evaluation of the possible benefits of new radiotherapy schemes.
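A bare-bones version of the PCA machinery described here, building eigenmodes from a few flattened displacement fields and sampling a synthetic geometry (the toy data and the mode count are placeholders):

```python
import numpy as np

# geometries: k samples of an organ surface with p points in 3D, flattened to k x 3p
rng = np.random.default_rng(8)
k, p = 8, 500
geometries = rng.normal(size=(k, 3 * p))     # stand-in for registered geometry samples

mean_geom = geometries.mean(axis=0)
X = geometries - mean_geom
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigenvalues = s**2 / (k - 1)                 # variance captured by each eigenmode
modes = Vt                                   # flattened 3D displacement eigenmodes

# Simulate a synthetic, statistically plausible geometry from the first few
# dominating eigenmodes, weighted by normally distributed coefficients.
n_modes = 3
coeffs = rng.normal(size=n_modes) * np.sqrt(eigenvalues[:n_modes])
synthetic = mean_geom + coeffs @ modes[:n_modes]
```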
Zhao, Yufeng; Xie, Qi; He, Liyun; Liu, Baoyan; Li, Kun; Zhang, Xiang; Bai, Wenjing; Luo, Lin; Jing, Xianghong; Huo, Ruili
2014-10-01
To help researchers select appropriate data mining models that provide better evidence for the clinical practice of Traditional Chinese Medicine (TCM) diagnosis and therapy, clinical issues addressed by data mining models were comprehensively summarized with respect to four significant elements of clinical studies: symptoms, symptom patterns, herbs, and efficacy. Existing problems were then generalized to identify the factors relevant to model performance, e.g. data type, sample size, parameters, and variable labels. Combining these factors, TCM clinical data features were compared with regard to their statistical characteristics and informatics properties, and the data mining models were compared in terms of their conditions of application and suitable scopes. The main application problems were data types inconsistent with, and samples too small for, the data mining models used, which led to inappropriate or even mistaken results. The models' features, i.e. advantages, disadvantages, suitable data types, data mining tasks, and the TCM issues they address, were summarized and compared. By attending to the particular features of the different data mining models, clinical doctors can select suitable models to resolve TCM problems.
Tang, Qi-Yi; Zhang, Chuan-Xi
2013-04-01
A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.
ERIC Educational Resources Information Center
Wine, Jennifer; Bryan, Michael; Siegel, Peter
2013-01-01
The National Postsecondary Student Aid Study (NPSAS) helps fulfill the U.S. Department of Education's National Center for Education Statistics (NCES) mandate to collect, analyze, and publish statistics related to education. The purpose of NPSAS is to compile a comprehensive research dataset, based on student-level records, on financial aid…
Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.
Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew
2012-08-08
Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.
For US Students, L2 Reading Comprehension Is Hard Because L2 Listening Comprehension Is Hard, Too
ERIC Educational Resources Information Center
Sparks, Richard; Patton, Jon; Luebbers, Julie
2018-01-01
The Simple View of Reading (SVR) model posits that reading is the product of word decoding and language comprehension and that oral language (listening) comprehension is the best predictor of reading comprehension once word-decoding skill has been established. The SVR model also proposes that there are good readers and three types of poor…
Hollings, Tracey; Robinson, Andrew; van Andel, Mary; Jewell, Chris; Burgman, Mark
2017-01-01
In livestock industries, reliable up-to-date spatial distribution and abundance records for animals and farms are critical for governments to manage and respond to risks. Yet few, if any, countries can afford to maintain comprehensive, up-to-date agricultural census data. Statistical modelling can be used as a proxy for such data but comparative modelling studies have rarely been undertaken for livestock populations. Widespread species, including livestock, can be difficult to model effectively due to complex spatial distributions that do not respond predictably to environmental gradients. We assessed three machine learning species distribution models (SDM) for their capacity to estimate national-level farm animal population numbers within property boundaries: boosted regression trees (BRT), random forests (RF) and K-nearest neighbour (K-NN). The models were built from a commercial livestock database and environmental and socio-economic predictor data for New Zealand. We used two spatial data stratifications to test (i) support for decision making in an emergency response situation, and (ii) the ability for the models to predict to new geographic regions. The performance of the three model types varied substantially, but the best performing models showed very high accuracy. BRTs had the best performance overall, but RF performed equally well or better in many simulations; RFs were superior at predicting livestock numbers for all but very large commercial farms. K-NN performed poorly relative to both RF and BRT in all simulations. The predictions of both multi species and single species models for farms and within hypothetical quarantine zones were very close to observed data. These models are generally applicable for livestock estimation with broad applications in disease risk modelling, biosecurity, policy and planning.
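The three-model comparison can be reproduced in outline with scikit-learn; note this sketch uses plain cross-validation on synthetic log-transformed counts, not the paper's spatial stratifications.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 1000
X = rng.normal(size=(n, 6))          # toy environmental/socio-economic predictors
y = np.exp(1 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n))  # farm counts

models = {
    "BRT": GradientBoostingRegressor(),
    "RF": RandomForestRegressor(n_estimators=200),
    "K-NN": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    # Compare models on log-counts with 5-fold cross-validated R^2
    r2 = cross_val_score(model, X, np.log(y), cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```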
Maurer, Christian; Baré, Jonathan; Kusmierczyk-Michulec, Jolanta; ...
2018-03-08
After performing a first multi-model exercise in 2015, a comprehensive and technically more demanding atmospheric transport modelling challenge was organized in 2016. Release data were provided by the Australian Nuclear Science and Technology Organization radiopharmaceutical facility in Sydney (Australia) for a one-month period. Measured samples for the same time frame were gathered from six International Monitoring System stations in the Southern Hemisphere with distances to the source ranging between 680 (Melbourne) and about 17,000 km (Tristan da Cunha). Participants were prompted to work with unit emissions in pre-defined emission intervals (daily, half-daily, 3-hourly and hourly emission segment lengths) and, in order to perform a blind test, actual emission values were not provided to them. Despite the quite different settings of the two atmospheric transport modelling challenges, there is common evidence that for long-range atmospheric transport using temporally highly resolved emissions and highly space-resolved meteorological input fields has no significant advantage compared to using lower resolved ones. Likewise, an uncertainty of up to 20% in the daily stack emission data turns out to be acceptable for the purpose of a study like this. Model performance at individual stations is quite diverse, depending largely on successfully capturing boundary layer processes. No single model-meteorology combination performs best for all stations. Moreover, the station statistics do not depend on the distance between the source and the individual stations. Finally, it became more evident how future exercises need to be designed. Set-up parameters like the meteorological driver or the output grid resolution should be prescribed in order to enhance diversity as well as comparability among model runs.
Tighe, Elizabeth L.; Wagner, Richard K.; Schatschneider, Christopher
2015-01-01
This study demonstrates the utility of applying a causal indicator modeling framework to investigate important predictors of reading comprehension in third, seventh, and tenth grade students. The results indicated that a 4-factor multiple indicators multiple causes (MIMIC) model of reading comprehension provided adequate fit at each grade level. This model included latent predictor constructs of decoding, verbal reasoning, nonverbal reasoning, and working memory and accounted for a large portion of the reading comprehension variance (73% to 87%) across grade levels. Verbal reasoning contributed the most unique variance to reading comprehension at all grade levels. In addition, we fit a multiple group 4-factor MIMIC model to investigate the relative stability (or variability) of the predictor contributions to reading comprehension across development (i.e., grade levels). The results revealed that the contributions of verbal reasoning, nonverbal reasoning, and working memory to reading comprehension were stable across the three grade levels. Decoding was the only predictor that could not be constrained to be equal across grade levels. The contribution of decoding skills to reading comprehension was higher in third grade and then remained relatively stable between seventh and tenth grade. These findings illustrate the feasibility of using MIMIC models to explain individual differences in reading comprehension across the development of reading skills. PMID:25821346
A Systematic Review of Global Drivers of Ant Elevational Diversity
Szewczyk, Tim; McCain, Christy M.
2016-01-01
Ant diversity shows a variety of patterns across elevational gradients, though the patterns and drivers have not been evaluated comprehensively. In this systematic review and reanalysis, we use published data on ant elevational diversity to detail the observed patterns and to test the predictions and interactions of four major diversity hypotheses: thermal energy, the mid-domain effect, area, and the elevational climate model. Of sixty-seven published datasets from the literature, only those with standardized, comprehensive sampling were used. Datasets included both local and regional ant diversity and spanned 80° in latitude across six biogeographical provinces. We used a combination of simulations, linear regressions, and non-parametric statistics to test multiple quantitative predictions of each hypothesis. We used an environmentally and geometrically constrained model as well as multiple regression to test their interactions. Ant diversity showed three distinct patterns across elevations: most common were hump-shaped mid-elevation peaks in diversity, followed by low-elevation plateaus and monotonic decreases in the number of ant species. The elevational climate model, which proposes that temperature and precipitation jointly drive diversity, and area were partially supported as independent drivers. Thermal energy and the mid-domain effect were not supported as primary drivers of ant diversity globally. The interaction models supported the influence of multiple drivers, though not a consistent set. In contrast to many vertebrate taxa, global ant elevational diversity patterns appear more complex, with the best environmental model contingent on precipitation levels. Differences in ecology and natural history among taxa may be crucial to the processes influencing broad-scale diversity patterns. PMID:27175999
Rhodes, Lindsay A; Huisingh, Carrie E; Quinn, Adam E; McGwin, Gerald; LaRussa, Frank; Box, Daniel; Owsley, Cynthia; Girkin, Christopher A
2017-02-01
To examine if racial differences in Bruch's membrane opening minimum rim width (BMO-MRW) in spectral-domain optical coherence tomography (SDOCT) exist, specifically between people of African descent (AD) and European descent (ED) in normal ocular health. Cross-sectional study. Patients presenting for a comprehensive eye examination at retail-based primary eye clinics were enrolled based on ≥1 of the following at-risk criteria for glaucoma: AD aged ≥40 years, ED aged ≥50 years, diabetes, family history of glaucoma, and/or pre-existing diagnosis of glaucoma. Participants with normal optic nerves on examination received SDOCT of the optic nerve head (24 radial scans). Global and regional (temporal, superotemporal, inferotemporal, nasal, superonasal, and inferonasal) BMO-MRW were measured and compared by race using generalized estimating equations. Models were adjusted for age, sex, and BMO area. SDOCT scans from 269 eyes (148 participants) were included in the analysis. Mean global BMO-MRW declined as age increased. After adjusting for age, sex, and BMO area, there was not a statistically significant difference in mean global BMO-MRW by race (P = .60). Regionally, the mean BMO-MRW was lower in the crude model among AD eyes in the temporal, superotemporal, and nasal regions and higher in the inferotemporal, superonasal, and inferonasal regions. However, in the adjusted model, these differences were not statistically significant. BMO-MRW was not statistically different between those of AD and ED. Race-specific normative data may not be necessary for the deployment of BMO-MRW in AD patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Application of the IRT and TRT Models to a Reading Comprehension Test
ERIC Educational Resources Information Center
Kim, Weon H.
2017-01-01
The purpose of the present study is to apply the item response theory (IRT) and testlet response theory (TRT) models to a reading comprehension test. This study applied the TRT models and the traditional IRT model to a seventh-grade reading comprehension test (n = 8,815) with eight testlets. These three models were compared to determine the best…
A comprehensive mechanistic model for upward two-phase flow in wellbores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sylvester, N.D.; Sarica, C.; Shoham, O.
1994-05-01
A comprehensive model is formulated to predict the flow behavior for upward two-phase flow. This model is composed of a model for flow-pattern prediction and a set of independent mechanistic models for predicting such flow characteristics as holdup and pressure drop in bubble, slug, and annular flow. The comprehensive model is evaluated by using a well data bank made up of 1,712 well cases covering a wide variety of field data. Model performance is also compared with six commonly used empirical correlations and the Hasan-Kabir mechanistic model. Overall model performance is in good agreement with the data. In comparison with other methods, the comprehensive model performed the best.
Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Ten Haken, Randall
2017-01-01
To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience. Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.
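The statistical DVH summary itself is straightforward to compute from a cohort of volume-normalized DVH curves; the sketch below builds the percentile bands and ranks a new plan at one dose level (the synthetic curves and the V30 comparison point are illustrative, and the GEM/WES scoring is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(10)
dose_grid = np.linspace(0, 70, 71)                       # Gy
# Each synthetic DVH: fraction of organ volume receiving >= dose d
cohort = np.array([
    np.clip(1 - dose_grid / rng.uniform(40, 70), 0, 1) ** rng.uniform(0.5, 2)
    for _ in range(100)
])

# Statistical DVH: median, quartiles, and 95% band at each dose point
stats_dvh = {
    "median": np.percentile(cohort, 50, axis=0),
    "q1": np.percentile(cohort, 25, axis=0),
    "q3": np.percentile(cohort, 75, axis=0),
    "lo95": np.percentile(cohort, 2.5, axis=0),
    "hi95": np.percentile(cohort, 97.5, axis=0),
}

# Score a new plan's DVH against the historical distribution, e.g. the
# fraction of historical curves it lies below at one dose level.
new_dvh = np.clip(1 - dose_grid / 55.0, 0, 1)
d_idx = np.searchsorted(dose_grid, 30)                   # V30 comparison point
percentile_rank = (cohort[:, d_idx] > new_dvh[d_idx]).mean()
print(percentile_rank)
```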
Kim, Seongsoon; Park, Donghyeon; Choi, Yonghwa; Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon; Kang, Jaewoo
2018-01-05
With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert level knowledge.
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
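For orientation, a minimal non-private baseline of the model being protected is sketched below: an L2-regularized logistic regression fit on pooled data with scikit-learn. The paper's actual contribution, distributed computation under cryptographic protection, is not reproduced here.

```python
# A minimal non-private baseline: L2-regularized logistic regression fit on
# pooled data. The secure solution described above replaces this pooled fit
# with a distributed, cryptographically protected computation (not shown).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```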
Covariations in ecological scaling laws fostered by community dynamics.
Zaoli, Silvia; Giometto, Andrea; Maritan, Amos; Rinaldo, Andrea
2017-10-03
Scaling laws in ecology, intended both as functional relationships among ecologically relevant quantities and the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances, and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked has been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply, and a comprehensive empirical verification remain unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such a framework is consistent with the stationary-state statistics of a broad class of resource-limited community dynamics models, regardless of parameterization and model assumptions. We verify the predicted covariations against empirical data and provide testable hypotheses for as-yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations.
Guided Comprehension in the Primary Grades.
ERIC Educational Resources Information Center
McLaughlin, Maureen
Intended as a response to recent developments in reading research and a demand by primary-grade teachers for a comprehension-based instructional framework, this book adapts the Guided Comprehension Model introduced in the author/educator's book "Guided Comprehension: A Teaching Model for Grades 3-8." According to the book, the Guided…
Comprehensive School Reform and Achievement: A Meta-Analysis. Educator's Summary
ERIC Educational Resources Information Center
Center for Data-Driven Reform in Education (NJ3), 2008
2008-01-01
Which comprehensive school reform programs have been proven to help elementary and secondary students achieve? To find out, this review summarizes evidence on comprehensive school reform (CSR) models in elementary and secondary schools. Comprehensive school reform models are programs used schoolwide to improve student achievement. They typically…
The Comprehension and Validation of Social Information.
ERIC Educational Resources Information Center
Wyer, Robert S., Jr.; Radvansky, Gabriel A.
1999-01-01
Proposes a theory of social cognition to account for the comprehension and verification of social information. The theory views comprehension as a process of constructing situation models of new information on the basis of previously formed models about its referents. The comprehension of both single statements and multiple pieces of information…
The Nuclear Thomas-Fermi Model
DOE R&D Accomplishments Database
Myers, W. D.; Swiatecki, W. J.
1994-08-01
The statistical Thomas-Fermi model is applied to a comprehensive survey of macroscopic nuclear properties. The model uses a Seyler-Blanchard effective nucleon-nucleon interaction, generalized by the addition of one momentum-dependent and one density-dependent term. The adjustable parameters of the interaction were fitted to shell-corrected masses of 1654 nuclei, to the diffuseness of the nuclear surface and to the measured depths of the optical model potential. With these parameters nuclear sizes are well reproduced, and only relatively minor deviations between measured and calculated fission barriers of 36 nuclei are found. The model determines the principal bulk and surface properties of nuclear matter and provides estimates for the more subtle, Droplet Model, properties. The predicted energy vs density relation for neutron matter is in striking correspondence with the 1981 theoretical estimate of Friedman and Pandharipande. Other extreme situations to which the model is applied are a study of Sn isotopes from ^{82}Sn to ^{170}Sn, and the rupture into a bubble configuration of a nucleus (constrained to spherical symmetry) which takes place when Z^2/A exceeds about 100.
Tighe, Elizabeth L.; Schatschneider, Christopher
2015-01-01
The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in Adult Basic Education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. PMID:25351773
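A minimal sketch of the multiple quantile regression technique follows, using simulated data and the statsmodels quantreg interface; the variable names and effect sizes are illustrative assumptions, not the study's data.

```python
# A minimal sketch of multiple quantile regression: the same predictors are
# fit at several quantiles of the outcome, so their coefficients can be
# compared across low vs. high comprehension levels. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({"morph": rng.normal(size=n), "vocab": rng.normal(size=n)})
df["reading"] = 0.8 * df["morph"] + 0.5 * df["vocab"] + rng.normal(scale=0.5, size=n)

for q in (0.1, 0.5, 0.9):
    res = smf.quantreg("reading ~ morph + vocab", df).fit(q=q)
    print(f"q={q}: morph={res.params['morph']:.2f}, vocab={res.params['vocab']:.2f}")
```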
A comprehensive approach for the evaluation and comparison of emission inventories in Madrid
NASA Astrophysics Data System (ADS)
Vedrenne, Michel; Borge, Rafael; Lumbreras, Julio; Rodríguez, María Encarnación; de la Paz, David; Pérez, Javier; Manuel de Andrés, Juan; Quaassdorff, Christina
2016-11-01
Emission inventories provide a description of the polluting activities that occur across a specific geographic domain, and are widely used as input for air quality modelling for the assessment of compliance with environmental legislation. The spatial scale to which these inventories are referred has an influence on the representativeness of the emission estimates, as these are underpinned by a number of considerations and data with different levels of granularity. This study proposes a comprehensive framework for the evaluation of emission inventories that allows methodological issues to be identified by examining differences in the performance of a chemical transport model (CTM) when such inventories are used as input. To demonstrate the approach, a comparison between the national and regional emission inventories for the Autonomous Community of Madrid (ACM) was carried out (NEI and REI, respectively). The analysis revealed discrepancies in compilation methodologies for the domestic sector (SNAP 02), industrial combustion (SNAP 03), road traffic (SNAP 07) and other mobile sources (SNAP 08); most of the differences were originally caused by taking into account different activity variables, fuel mixes, and spatial disaggregation and allocation proxies. The granularity of the base data (statistics, fuel consumption, facilities, etc.) proved to be an essential limiting factor, which means that whenever bottom-up approaches were followed, the description of emission sectors tended to be more accurate.
Swetha, Jonnalagadda Laxmi; Arpita, Ramisetti; Srikanth, Chintalapani; Nutalapati, Rajasekhar
2014-01-01
Biostatistics is an integral part of research protocols. In any field of inquiry or investigation, data obtained are subsequently classified, analyzed and tested for accuracy by statistical methods. Statistical analysis of collected data thus forms the basis for all evidence-based conclusions. The aim of this study is to evaluate the cognition, comprehension and application of biostatistics in research among postgraduate students in Periodontics in India. A total of 391 postgraduate students registered for a master's course in periodontics at various dental colleges across India were included in the survey. Data regarding the level of knowledge, understanding and its application in the design and conduct of research protocols were collected using a dichotomous questionnaire. Descriptive statistics were used for data analysis. Nearly 79.2% of students were aware of the importance of biostatistics in research, 55-65% were familiar with MS-Excel spreadsheets for graphical representation of data and with the statistical software available on the internet, 26.0% had biostatistics as a mandatory subject in their curriculum, 9.5% tried to perform statistical analysis on their own, while 3.0% were successful in performing statistical analysis of their studies on their own. Biostatistics should play a central role in the planning, conduct, interim analysis, final analysis and reporting of periodontal research, especially by postgraduate students. Indian postgraduate students in periodontics are aware of the importance of biostatistics in research, but the level of understanding and application is still basic and needs to be addressed.
The application of latent curve analysis to testing developmental theories in intervention research.
Curran, P J; Muthén, B O
1999-08-01
The effectiveness of a prevention or intervention program has traditionally been assessed using time-specific comparisons of mean levels between the treatment and the control groups. However, many times the behavior targeted by the intervention is naturally developing over time, and the goal of the treatment is to alter this natural or normative developmental trajectory. Examining time-specific mean levels can be both limiting and potentially misleading when the behavior of interest is developing systematically over time. It is argued here that there are both theoretical and statistical advantages associated with recasting intervention treatment effects in terms of normative and altered developmental trajectories. The recently developed technique of latent curve (LC) analysis is reviewed and extended to a true experimental design setting in which subjects are randomly assigned to a treatment intervention or a control condition. LC models are applied to both artificially generated and real intervention data sets to evaluate the efficacy of an intervention program. Not only do the LC models provide a more comprehensive understanding of the treatment and control group developmental processes compared to more traditional fixed-effects models, but LC models have greater statistical power to detect a given treatment effect. Finally, the LC models are modified to allow for the computation of specific power estimates under a variety of conditions and assumptions that can provide much needed information for the planning and design of more powerful but cost-efficient intervention programs for the future.
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 - PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
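A toy illustration of the regularization idea (not the PLSc algorithm itself): under nearly collinear predictors, ordinary least squares coefficients destabilize, while a ridge penalty shrinks them toward stable values.

```python
# A toy illustration of why a ridge penalty helps under multicollinearity,
# the motivation for regularized PLSc. This sketches the regularization idea
# only, not the PLSc estimation procedure.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.5, size=200)

print("OLS  :", LinearRegression().fit(X, y).coef_)   # large offsetting values
print("Ridge:", Ridge(alpha=1.0).fit(X, y).coef_)     # shrunk, stable estimates
```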
Lopopolo, Alessandro; Frank, Stefan L; van den Bosch, Antal; Willems, Roel M
2017-01-01
Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.
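A minimal sketch of the model-derived measures, assuming a simple bigram estimator: per-word surprisal and sequence perplexity computed from conditional probabilities. A real study would use large corpora and smoothing; the toy corpus here only makes the formulas concrete.

```python
# A minimal sketch: bigram surprisal per word, -log2 P(word | prev), and the
# sequence perplexity 2^(mean surprisal), estimated from raw counts.
import math
from collections import Counter

corpus = "the dog chased the cat the cat chased the dog".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word):
    """-log2 P(word | prev), estimated from bigram counts (no smoothing)."""
    p = bigrams[(prev, word)] / unigrams[prev]
    return -math.log2(p)

sent = "the cat chased the dog".split()
s = [surprisal(w1, w2) for w1, w2 in zip(sent, sent[1:])]
print("per-word surprisal:", [round(v, 2) for v in s])
print("perplexity:", round(2 ** (sum(s) / len(s)), 2))
```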
Sörqvist, Patrik; Hurtig, Anders; Ljung, Robert; Rönnberg, Jerker
2014-01-01
The purpose of this experiment was to investigate whether classroom reverberation influences second-language (L2) listening comprehension. Moreover, we investigated whether individual differences in baseline L2 proficiency and in working memory capacity (WMC) modulate the effect of reverberation time on L2 listening comprehension. The results showed that L2 listening comprehension decreased as reverberation time increased. Participants with higher baseline L2 proficiency were less susceptible to this effect. WMC was also related to the effect of reverberation (although just barely significant), but the effect of WMC was eliminated when baseline L2 proficiency was statistically controlled. Taken together, the results suggest that top-down cognitive capabilities support listening in adverse conditions. Potential implications for the Swedish national tests in English are discussed. PMID:24646043
Identifying and Investigating Unexpected Response to Treatment: A Diabetes Case Study.
Ozery-Flato, Michal; Ein-Dor, Liat; Parush-Shear-Yashuv, Naama; Aharonov, Ranit; Neuvirth, Hani; Kohn, Martin S; Hu, Jianying
2016-09-01
The availability of electronic health records creates fertile ground for developing computational models of various medical conditions. We present a new approach for detecting and analyzing patients with unexpected responses to treatment, building on machine learning and statistical methodology. Given a specific patient, we compute a statistical score for the deviation of the patient's response from responses observed in other patients having similar characteristics and medication regimens. These scores are used to define cohorts of patients showing deviant responses. Statistical tests are then applied to identify clinical features that correlate with these cohorts. We implement this methodology in a tool that is designed to assist researchers in the pharmaceutical field to uncover new features associated with reduced response to a treatment. It can also aid physicians by flagging patients who are not responding to treatment as expected and hence deserve more attention. The tool provides comprehensive visualizations of the analysis results and the supporting data, both at the cohort level and at the level of individual patients. We demonstrate the utility of our methodology and tool in a population of type II diabetic patients, treated with antidiabetic drugs, and monitored by the HbA1C test.
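A minimal sketch of the deviation-scoring step under a normality assumption: the patient's response is converted to a z-score against the responses of similar patients, and a two-sided tail probability flags unexpected responders. The cohort values below are illustrative.

```python
# A minimal sketch of deviation scoring: compare a patient's observed response
# with responses of similar patients (matched on characteristics and
# medication regimen), assuming approximate normality of the reference set.
import numpy as np
from scipy import stats

def deviation_score(patient_response, similar_responses):
    ref = np.asarray(similar_responses, dtype=float)
    z = (patient_response - ref.mean()) / ref.std(ddof=1)
    p = 2 * stats.norm.sf(abs(z))   # two-sided tail probability
    return z, p

# e.g. change in HbA1c after treatment; the matched cohort mostly improved.
z, p = deviation_score(+0.3, [-1.2, -0.8, -1.0, -0.5, -0.9, -1.1, -0.7])
print(f"z = {z:.2f}, p = {p:.4f}")   # a large positive z flags an unexpected response
```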
Prodinger, Birgit; Ballert, Carolina S; Brach, Mirjam; Brinkhof, Martin W G; Cieza, Alarcos; Hug, Kerstin; Jordan, Xavier; Post, Marcel W M; Scheel-Sailer, Anke; Schubert, Martin; Tennant, Alan; Stucki, Gerold
2016-02-01
Functioning is an important outcome to measure in cohort studies. Clear and operational outcomes are needed to judge the quality of a cohort study. This paper outlines guiding principles for reporting functioning in cohort studies and addresses some outstanding issues. Principles of how to standardize reporting of data from a cohort study on functioning, by deriving scores that are most useful for further statistical analysis and reporting, are outlined. The Swiss Spinal Cord Injury Cohort Study Community Survey serves as a case in point to provide a practical application of these principles. Development of reporting scores must be conceptually coherent and metrically sound. The International Classification of Functioning, Disability and Health (ICF) can serve as the frame of reference for this, with its categories serving as reference units for reporting. To derive a score for further statistical analysis and reporting, items measuring a single latent trait must be invariant across groups. The Rasch measurement model is well suited to test these assumptions. Our approach is a valuable guide for researchers and clinicians, as it fosters comparability of data, strengthens the comprehensiveness of scope, and provides invariant, interval-scaled data for further statistical analyses of functioning.
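For reference, the Rasch measurement model mentioned above reduces to a one-parameter logistic: the probability of endorsing an item depends only on the difference between person ability and item difficulty. A minimal sketch, with illustrative difficulty values:

```python
# The Rasch model: P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b))), where
# theta is person ability and b is item difficulty. Invariance means the same
# item difficulties should hold across groups.
import math

def rasch_prob(theta, b):
    """Probability of a correct/endorsed response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person of average ability on an easy, a matched, and a hard item:
for b in (-1.0, 0.0, 1.5):
    print(f"difficulty {b:+.1f}: P(endorse) = {rasch_prob(0.0, b):.2f}")
```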
Swanson, H L; Trahan, M
1996-09-01
The present study investigates (a) whether learning disabled readers' working memory deficits that underlie poor reading comprehension are related to a general system, and (b) whether metacognition contributes to comprehension beyond what is predicted by working memory and word knowledge. To this end, performance of learning disabled (N = 60) and average readers (N = 60) was compared on the reading comprehension, reading rate, and vocabulary subtests of the Nelson Skills Reading Test, a Sentence Span test composed of high- and low-imagery words, and a Metacognitive Questionnaire. As expected, differences between groups in working memory, vocabulary, and reading measures emerged, whereas ability groups were statistically comparable on the Metacognitive Questionnaire. A within-group analysis indicated that the correlation patterns between working memory, vocabulary, metacognition, and reading comprehension were not the same between ability groups. For predicting reading comprehension, the metacognitive questionnaire best predicted learning disabled readers' performance, whereas the working memory span measure that included low-imagery words best predicted average achieving readers' comprehension. Overall, the results suggest that the relationship between learning disabled readers' generalised working memory deficits and poor reading comprehension may be mediated by metacognition.
ERIC Educational Resources Information Center
Library of Congress, Washington, DC. Congressional Research Service.
This handbook contains a comprehensive selection of United States and foreign energy statistics in the form of graphs and tables. The data are classified according to resources, production, consumption and demand, energy and gross national product, and research and development. Statistics on energy sources such as coal, oil, gas, nuclear energy,…
ERIC Educational Resources Information Center
Rahim, Syed A.
Based in part on a list developed by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) for use in Afghanistan, this document presents a comprehensive checklist of items of statistical and descriptive data required for planning a national communication system. It is noted that such a system provides the vital…
An Evaluation of a Testing Model for Listening Comprehension.
ERIC Educational Resources Information Center
Kangli, Ji
A model for testing listening comprehension in English as a Second Language is discussed and compared with the Test for English Majors (TEM). The model in question incorporates listening for: (1) understanding factual information; (2) comprehension and interpretation; (3) detailed and selective information; (4) global ideas; (5) on-line tasks…
Testing and Refining the Direct and Inferential Mediation Model of Reading Comprehension
ERIC Educational Resources Information Center
Cromley, Jennifer G.; Azevedo, Roger
2007-01-01
A significant proportion of American high school students struggle with reading comprehension. Theoretical models of reading comprehension might help researchers understand these difficulties, because they can point to variables that make the largest contributions to comprehension. On the basis of an extensive review of the literature, we created…
Towards a comprehensive city emission function (CCEF)
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav
2018-01-01
The comprehensive city emission function (CCEF) is developed for heterogeneous light-emitting or light-blocking urban environments, embracing any combination of input parameters that characterize linear dimensions in the system (size of and distances between buildings or luminaires), properties of light-emitting elements (such as luminous building façades and street lighting), ground reflectance and total uplight fraction, all of these defined for an arbitrarily sized 2D area. The analytical formula obtained is not restricted to a single model class, as it can capture any specific light-emission feature for a wide range of cities. The CCEF method is numerically fast, in contrast to what can be expected of other probabilistic approaches that rely on repeated random sampling. Hence the present solution has great potential in light-pollution modeling and can be included in larger numerical models. Our theoretical findings promise great progress in light-pollution modeling, as this is the first time an analytical solution to the city emission function (CEF) has been developed that depends on the statistical mean size and height of city buildings, inter-building separation, prevailing heights of light fixtures, lighting density, and other factors such as luminaire light output and light distribution, including the amount of uplight, and representative city size. The model is validated for sensitivity and specificity pertinent to combinations of input parameters in order to test its behavior under various conditions, including those that can occur in complex urban environments. It is demonstrated that the solution model succeeds in reproducing a light emission peak at some elevated zenith angles and is consistent with reduced rather than enhanced emission in directions nearly parallel to the ground.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that dramatically reduces computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under the intrinsic creep effect of concrete with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.
A generalized plate method for estimating total aerobic microbial count.
Ho, Kai Fai
2004-01-01
The plate method outlined in Chapter 61: Microbial Limit Tests of the U.S. Pharmacopeia (USP 61) provides very specific guidance for assessing total aerobic bioburden in pharmaceutical articles. This methodology, while comprehensive, lacks the flexibility to be useful in all situations. By studying the plate method as a special case within a more general family of assays, the effects of each parameter in the guidance can be understood. Using a mathematical model to describe the plate counting procedure, a statistical framework for making more definitive statements about total aerobic bioburden is developed. Such a framework allows the laboratory scientist to adjust the USP 61 methods to satisfy specific practical constraints. In particular, it is shown that the plate method can be conducted, albeit with stricter acceptance criteria, using a test specimen quantity that is smaller than the 10 g or 10 mL prescribed in the guidance. Finally, the interpretation of results proffered by the guidance is re-examined within this statistical framework and shown to be overly aggressive.
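A minimal sketch of the underlying counting statistics, assuming colony counts are Poisson with mean proportional to specimen quantity: the relative spread of the count distribution grows as the specimen shrinks, which is why a reduced test quantity demands adjusted (stricter) acceptance criteria. The specification value is illustrative.

```python
# A minimal sketch, assuming Poisson colony counts with mean proportional to
# specimen quantity: the 95th-percentile count lies relatively further above
# the mean for a small specimen, showing the larger statistical uncertainty
# that a reduced test quantity introduces.
from scipy.stats import poisson

spec_cfu_per_g = 100          # illustrative specification: <= 100 CFU/g
for grams in (10.0, 1.0):     # prescribed vs. reduced specimen quantity
    mean_at_spec = spec_cfu_per_g * grams
    p95 = poisson.ppf(0.95, mean_at_spec)
    rel = (p95 - mean_at_spec) / mean_at_spec
    print(f"{grams:>4} g: 95th-percentile count {int(p95)} "
          f"({rel:+.1%} above the mean at the specification limit)")
```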
A Vignette (User's Guide) for “An R Package for Statistical ...
StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of performing are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), a mixed-effects Cox proportional hazards model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed-effects ANOVA, repeated-measures ANOVA, and the Dunnett test. This document provides a User's Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now publicly available from CRAN.
Meyer, Patrick E; Lafitte, Frédéric; Bontempi, Gianluca
2008-10-29
This paper presents the R/Bioconductor package minet (version 1.1.6), which provides a set of functions to infer mutual information networks from a dataset. Once fed with a microarray dataset, the package returns a network where nodes denote genes, edges model statistical dependencies between genes, and the weight of an edge quantifies the statistical evidence of a specific (e.g. transcriptional) gene-to-gene interaction. Four different entropy estimators are made available in the package minet (empirical, Miller-Madow, Schurmann-Grassberger and shrink) as well as four different inference methods, namely relevance networks, ARACNE, CLR and MRNET. Also, the package integrates accuracy assessment tools, like F-scores, PR-curves and ROC-curves, in order to compare the inferred network with a reference one. The package minet provides a series of tools for inferring transcriptional networks from microarray data. It is freely available from the Comprehensive R Archive Network (CRAN) as well as from the Bioconductor website.
Rosen, G D
2006-06-01
Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection and analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables on a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning of future research. As examples, holo-analyses of the effects of exogenous phytases on broiler feed intake and live weight gain are cited, in which the models account for 70% of the variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better accounting for variation will be possible in the future if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.
ERIC Educational Resources Information Center
Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.
2002-01-01
Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains…
Rixen, M.; Ferreira-Coelho, E.; Signell, R.
2008-01-01
Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our still limited understanding of all the processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on the underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skill when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models is discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches.
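A minimal sketch of the hyper-ensemble idea under simplified assumptions: three synthetic "model" forecasts are combined by an affine least-squares fit against past observations, and the combination corrects the individual biases on a held-out period.

```python
# A minimal sketch of an optimal local combination: fit affine weights for
# several model forecasts against past truth, then apply them forward. The
# three "models" (biased, damped, noisy) are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 6, 120))
models = np.column_stack([
    truth + 0.3,                                        # biased model
    0.6 * truth,                                        # damped model
    truth + rng.normal(scale=0.2, size=truth.size),     # noisy model
])
A = np.column_stack([models, np.ones_like(truth)])      # affine combination
coef, *_ = np.linalg.lstsq(A[:90], truth[:90], rcond=None)  # train on first 90 steps
forecast = A @ coef

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
best_single = min(rmse(models[90:, j], truth[90:]) for j in range(3))
print(f"best single model RMSE: {best_single:.3f}, "
      f"hyper-ensemble RMSE: {rmse(forecast[90:], truth[90:]):.3f}")
```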
Skill Assessment in Ocean Biological Data Assimilation
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Friedrichs, Marjorie A. M.; Robinson, Allan R.; Rose, Kenneth A.; Schlitzer, Reiner; Thompson, Keith R.; Doney, Scott C.
2008-01-01
There is growing recognition that rigorous skill assessment is required to understand the ability of ocean biological models to represent ocean processes and distributions. Statistical analysis of model results against observations represents the most quantitative form of skill assessment, and this principle serves as well for data assimilation models. However, skill assessment for data assimilation requires special consideration. This is because there are three sets of information: the free-run model, the data, and the assimilation model, which uses information from both the free-run model and the data. Intercomparison of results among the three sets of information is important and useful for assessment, but is not conclusive, since the three information sets are intertwined. An independent data set is necessary for an objective determination. Other useful measures of ocean biological data assimilation assessment include responses of unassimilated variables to the data assimilation, performance outside the prescribed region/time of interest, forecasting, and trend analysis. Examples of each approach from the literature are provided. A comprehensive list of ocean biological data assimilation efforts and their applications of skill assessment, in both ecosystem/biogeochemical and fisheries contexts, is summarized.
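A minimal sketch of the statistical skill assessment described above: the same summary statistics (bias, RMSE, correlation) are computed for a free-run model and an assimilation model against an independent data set. All values are synthetic.

```python
# A minimal sketch: identical skill statistics computed for the free-run and
# assimilation models against independent (withheld) observations.
import numpy as np

def skill(model, obs):
    err = model - obs
    return {"bias": float(err.mean()),
            "rmse": float(np.sqrt((err ** 2).mean())),
            "corr": float(np.corrcoef(model, obs)[0, 1])}

rng = np.random.default_rng(3)
obs = rng.lognormal(mean=0.0, sigma=0.5, size=200)        # e.g. chlorophyll
free_run = obs * 1.4 + rng.normal(scale=0.3, size=200)    # biased free model
assim = obs * 1.05 + rng.normal(scale=0.15, size=200)     # after assimilation

print("free-run    :", skill(free_run, obs))
print("assimilation:", skill(assim, obs))
```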
Survey data and metadata modelling using document-oriented NoSQL
NASA Astrophysics Data System (ADS)
Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.
2018-03-01
Survey data collected from year to year undergo metadata changes, yet they need to be stored in an integrated way so that statistical data can be obtained faster and more easily. A data warehouse (DW) can be used to address this need; however, the change of variables in every period cannot be accommodated by a traditional DW via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW by managing metadata with a multiversion DW (MVDW), designed using the relational model. Other research has found that non-relational models in NoSQL databases offer faster reading times than relational models. Therefore, we propose managing metadata changes by using NoSQL. This study proposes a DW model to manage change, together with algorithms to retrieve data whose metadata have changed. Evaluation of the proposed models and algorithms shows that a database with the proposed design can retrieve data with metadata changes properly. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.
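A minimal sketch of the document-oriented idea, with illustrative field names: each survey wave is stored as one document carrying its own metadata version, and a retrieval layer maps renamed variables across versions. A real deployment would sit on a document store such as MongoDB.

```python
# A minimal sketch: per-wave documents carry their own metadata version, so a
# variable rename between waves needs a mapping, not a schema migration.
# All field names are illustrative.
wave_2016 = {"wave": 2016, "metadata_version": 1,
             "variables": {"income": "monthly income (USD)"},
             "records": [{"id": 1, "income": 950}]}
wave_2017 = {"wave": 2017, "metadata_version": 2,
             "variables": {"income_net": "monthly net income (USD)",  # renamed
                           "hh_size": "household size"},              # added
             "records": [{"id": 1, "income_net": 990, "hh_size": 4}]}

# The retrieval layer resolves a logical variable to its per-version field name.
rename_map = {1: {"income": "income"}, 2: {"income": "income_net"}}

def get_income(wave_doc, record):
    field = rename_map[wave_doc["metadata_version"]]["income"]
    return record[field]

for doc in (wave_2016, wave_2017):
    print(doc["wave"], [get_income(doc, r) for r in doc["records"]])
```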
NASA Astrophysics Data System (ADS)
Knopoff, Damián A.
2016-09-01
The recent review paper [4] constitutes a valuable contribution to the understanding, modeling and simulation of crowd dynamics in extreme situations. It provides a very comprehensive review of the complexity features of the system under consideration, of scaling, and of the consequent justification of the methods used. In particular, macroscopic and microscopic models have so far been used to model crowd dynamics [9], and the authors appropriately explain that working at the mesoscale is a good choice to deal with the heterogeneous behaviour of walkers as well as with the difficulty of their deterministic identification. In this way, methods based on kinetic theory and statistical dynamics are employed, more precisely the so-called kinetic theory for active particles [7]. This approach has successfully been applied in the modeling of several complex dynamics, with recent applications to learning [2,8], which constitutes the key to understanding communication and is of great importance in social dynamics and behavioral sciences.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
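As a concrete example of the informal methods listed above, here is a minimal GLUE sketch on a toy one-parameter recession model (a stand-in for a rainfall-runoff model such as HYMOD): Monte Carlo parameter sets are scored with Nash-Sutcliffe efficiency, thresholded into a behavioral set, and used to form prediction limits.

```python
# A minimal GLUE sketch: sample parameters, score each simulation with an
# informal likelihood (Nash-Sutcliffe efficiency), keep "behavioral" sets
# above a threshold, and derive prediction limits. The exponential recession
# "model" is a toy stand-in for a full rainfall-runoff model.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 100)
obs = np.exp(-0.35 * t) + rng.normal(scale=0.02, size=t.size)  # synthetic flow

def model(k):
    return np.exp(-k * t)

ks = rng.uniform(0.1, 0.8, size=2000)              # Monte Carlo parameter sample
sims = np.array([model(k) for k in ks])
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
behavioral = nse > 0.7                              # informal likelihood threshold
print(f"{behavioral.sum()} behavioral parameter sets out of {ks.size}")

# 5-95% prediction limits over behavioral simulations (GLUE proper weights
# each simulation by its normalized likelihood; plain percentiles for brevity).
lo, hi = np.percentile(sims[behavioral], [5, 95], axis=0)
print("prediction band at t=0:", round(float(lo[0]), 3), "-", round(float(hi[0]), 3))
```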
Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.
Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung
2017-01-01
The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with feedforward neural networks (FNN), are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of the individual forecasting models, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and the covariates. DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient.
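A minimal sketch of the BMA fusion step, assuming model weights proportional to exp(-0.5·ΔBIC); the four forecasts and BIC values are illustrative stand-ins for the GLM, LASSO, ARIMA, and DL models, not results from the study.

```python
# A minimal BMA sketch: approximate posterior model weights from BIC
# differences, then combine the individual forecasts as a weighted average.
# All numbers are illustrative.
import numpy as np

forecasts = {"GLM": 120.0, "LASSO": 131.0, "ARIMA": 126.0, "DL": 118.0}  # next-week ILI
bic = {"GLM": 410.2, "LASSO": 408.7, "ARIMA": 413.5, "DL": 405.1}

delta = {m: bic[m] - min(bic.values()) for m in bic}
raw_w = {m: np.exp(-0.5 * delta[m]) for m in delta}
total = sum(raw_w.values())
weights = {m: raw_w[m] / total for m in raw_w}

bma_forecast = sum(weights[m] * forecasts[m] for m in forecasts)
print("weights     :", {m: round(w, 3) for m, w in weights.items()})
print("BMA forecast:", round(bma_forecast, 1))
```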
Rapid Expectation Adaptation during Syntactic Comprehension
Fine, Alex B.; Jaeger, T. Florian; Farmer, Thomas A.; Qian, Ting
2013-01-01
When we read or listen to language, we are faced with the challenge of inferring intended messages from noisy input. This challenge is exacerbated by considerable variability between and within speakers. Focusing on syntactic processing (parsing), we test the hypothesis that language comprehenders rapidly adapt to the syntactic statistics of novel linguistic environments (e.g., speakers or genres). Two self-paced reading experiments investigate changes in readers’ syntactic expectations based on repeated exposure to sentences with temporary syntactic ambiguities (so-called “garden path sentences”). These sentences typically lead to a clear expectation violation signature when the temporary ambiguity is resolved to an a priori less expected structure (e.g., based on the statistics of the lexical context). We find that comprehenders rapidly adapt their syntactic expectations to converge towards the local statistics of novel environments. Specifically, repeated exposure to a priori unexpected structures can reduce, and even completely undo, their processing disadvantage (Experiment 1). The opposite is also observed: a priori expected structures become less expected (even eliciting garden paths) in environments where they are hardly ever observed (Experiment 2). Our findings suggest that, when changes in syntactic statistics are to be expected (e.g., when entering a novel environment), comprehenders can rapidly adapt their expectations, thereby overcoming the processing disadvantage that mistaken expectations would otherwise cause. Our findings take a step towards unifying insights from research in expectation-based models of language processing, syntactic priming, and statistical learning. PMID:24204909
2000-08-01
Data Fusion in Large Arrays of Microsensors (SensorWeb): A Comprehensive Approach to… [only the report title and fragmentary citations (a keynote by Sergio Verdu; LATIN 2006, Latin American Theoretical Informatics, Valdivia, Chile, March 2006; A.P. George, W.B. Powell, S.R. Kulkarni, IEEE Transactions on Wireless Communications, February 2006) are recoverable from the garbled report form]
Knowledge-Based Environmental Context Modeling
NASA Astrophysics Data System (ADS)
Pukite, P. R.; Challou, D. J.
2017-12-01
As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient use of our renewable natural resources. [1] C2M2L (Component, Context, and Manufacturing Model Library) Final Report, https://doi.org/10.13140/RG.2.1.4956.3604
ERIC Educational Resources Information Center
Campbell, Robert E.; And Others
This handbook presents management techniques, program ideas, and student activities for building comprehensive secondary career guidance programs. Part 1 (chapter 1) traces the history of guidance to set the stage for the current emphasis on comprehensive programs, summarizes four representative models for designing comprehensive programs, and…
ERIC Educational Resources Information Center
Wagner, Richard K.; Herrera, Sarah K.; Spencer, Mercedes; Quinn, Jamie M.
2015-01-01
Recently, Tunmer and Chapman provided an alternative model of how decoding and listening comprehension affect reading comprehension that challenges the simple view of reading. They questioned the simple view's fundamental assumption that oral language comprehension and decoding make independent contributions to reading comprehension by arguing…
Geytenbeek, Joke J M; Vermeulen, R Jeroen; Becher, Jules G; Oostrom, Kim J
2015-03-01
To assess spoken language comprehension in non-speaking children with severe cerebral palsy (CP) and to explore possible associations with motor type and disability. Eighty-seven non-speaking children (44 males, 43 females, mean age 6y 8mo, SD 2y 1mo) with spastic (54%) or dyskinetic (46%) CP (Gross Motor Function Classification System [GMFCS] levels IV [39%] and V [61%]) underwent spoken language comprehension assessment with the computer-based instrument for low motor language testing (C-BiLLT), a new and validated diagnostic instrument. A multiple linear regression model was used to investigate which variables explained the variation in C-BiLLT scores. Associations between spoken language comprehension abilities (expressed in z-score or age-equivalent score) and motor type of CP, GMFCS and Manual Ability Classification System (MACS) levels, gestational age, and epilepsy were analysed with Fisher's exact test. A p-value <0.05 was considered statistically significant. Chronological age, motor type, and GMFCS classification explained 33% (R=0.577, R(2)=0.33) of the variance in spoken language comprehension. Of the children aged younger than 6 years 6 months, 52.4% of the children with dyskinetic CP attained comprehension scores within the average range (z-score ≥-1.6) as opposed to none of the children with spastic CP. Of the children aged older than 6 years 6 months, 32% of the children with dyskinetic CP reached the highest achievable age-equivalent score compared to 4% of the children with spastic CP. No significant difference in disability was found between CP-related variables (MACS levels, gestational age, epilepsy), with the exception of GMFCS, which showed a significant difference in children aged younger than 6 years 6 months (p=0.043). Despite communication disabilities in children with severe CP, particularly in dyskinetic CP, spoken language comprehension may show no or only moderate delay. These findings emphasize the importance of introducing alternative and/or augmentative communication devices from early childhood.
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1993-01-01
Presents and tests a theoretically derived causal model of the recall of sentences. Notes that the causal model identifies familiarity and concreteness as causes of comprehensibility; familiarity, concreteness, and comprehensibility as causes of interestingness; and all the identified variables as causes of both immediate and delayed recall.…
Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon
2015-11-03
Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be treated as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++, and the source code is available for free through the SourceForge website: http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
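The total-intensity-sum normalization described above can be illustrated with a minimal Python sketch; the function name and the toy fragment-by-sample matrix are hypothetical, and mapDIA itself is a C++ tool with additional options (e.g., local normalization in retention time space) not shown here.

```python
import numpy as np

def normalize_by_total_intensity(intensity):
    """Scale each sample (column) so its total fragment intensity matches
    the across-sample median total -- a simplified analogue of the
    total-intensity-sum normalization step."""
    totals = intensity.sum(axis=0)        # per-sample intensity totals
    target = np.median(totals)            # common reference level
    return intensity * (target / totals)  # broadcast scaling per column

# toy data: 4 fragments x 3 samples (hypothetical values)
X = np.array([[100., 220., 90.],
              [ 50., 110., 45.],
              [ 25.,  55., 20.],
              [ 10.,  20., 12.]])
Xn = normalize_by_total_intensity(X)
print(Xn.sum(axis=0))  # all column totals now equal the median total
```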
Relational Care for Perinatal Substance Use: A Systematic Review.
Kramlich, Debra; Kronk, Rebecca
2015-01-01
The purpose of this systematic review of the literature is to highlight published studies of perinatal substance use disorder that address relational aspects of various care delivery models to identify opportunities for future studies in this area. Quantitative, qualitative, and mixed-methods studies that included relational variables, such as healthcare provider engagement with pregnant women and facilitation of maternal-infant bonding, were identified using PubMed, Scopus, and EBSCO databases. Key words included neonatal abstinence syndrome, drug, opioid, substance, dependence, and pregnancy. Six studies included in this review identified statistically and/or clinically significant positive maternal and neonatal outcomes thought to be linked to engagement in antenatal care and development of caring relationships with healthcare providers. Comprehensive, integrated multidisciplinary services for pregnant women with substance use disorder aimed at harm reduction are showing positive results. Evidence exists that pregnant women's engagement with comprehensive services facilitated by caring relationships with healthcare providers may improve perinatal outcomes. Gaps in the literature remain; studies have yet to identify the relative contribution of multiple risk factors to adverse outcomes as well as program components most likely to improve outcomes.
On the Yakhot-Orszag renormalization group method for deriving turbulence statistics and models
NASA Technical Reports Server (NTRS)
Smith, L. M.; Reynolds, W. C.
1992-01-01
An independent, comprehensive, critical review of the 'renormalization group' (RNG) theory of turbulence developed by Yakhot and Orszag (1986) is provided. Their basic theory for the Navier-Stokes equations is confirmed, and approximations in the scale removal procedure are discussed. The YO derivations of the velocity-derivative skewness and the transport equation for the energy dissipation rate are examined. An algebraic error in the derivation of the skewness is corrected. The corrected RNG skewness value of -0.59 is in agreement with experiments at moderate Reynolds numbers. Several problems are identified in the derivation of the energy dissipation rate equations which suggest that the derivation should be reformulated.
Engel, Christoph; Hamann, Hanjo
2016-01-01
The (German) market for law professors fulfils the conditions for a hog cycle: In the short run, supply cannot be extended or limited; future law professors must be hired soon after they first present themselves, or leave the market; demand is inelastic. Using a comprehensive German dataset, we show that the number of market entries today is negatively correlated with the number of market entries eight years ago. This suggests short-sighted behavior of young scholars at the time when they decide to prepare for the market. Using our statistical model, we make out-of-sample predictions for the German academic market in law until 2020.
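The paper's central empirical claim, a negative correlation between market entries today and entries eight years earlier, can be checked on any annual series with a short script. This is a minimal sketch on invented data (a Poisson series with a built-in 16-year cycle), not the authors' dataset or statistical model.

```python
import numpy as np

def lag_correlation(entries, lag=8):
    """Pearson correlation between entries in year t and year t - lag."""
    x, y = entries[:-lag], entries[lag:]
    return np.corrcoef(x, y)[0, 1]

# hypothetical annual counts of new entrants to the academic market
rng = np.random.default_rng(0)
base = 40 + 10 * np.sin(np.arange(40) * np.pi / 8)  # 16-year hog cycle
entries = rng.poisson(base)
print(lag_correlation(entries, lag=8))  # negative for a hog-cycled series
```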
Evaluation of WRF Parameterizations for Air Quality Applications over the Midwest USA
NASA Astrophysics Data System (ADS)
Zheng, Z.; Fu, K.; Balasubramanian, S.; Koloutsou-Vakakis, S.; McFarland, D. M.; Rood, M. J.
2017-12-01
Reliable predictions from Chemical Transport Models (CTMs) for air quality research require accurate gridded weather inputs. In this study, a sensitivity analysis of 17 Weather Research and Forecasting (WRF) model runs was conducted to explore the optimum configuration in six physics categories (i.e., cumulus, surface layer, microphysics, land surface model, planetary boundary layer, and longwave/shortwave radiation) for the Midwest USA. WRF runs were initially conducted over four days in May 2011 for a 12 km x 12 km domain over the contiguous USA and a nested 4 km x 4 km domain over the Midwest USA (i.e., Illinois and adjacent areas including Iowa, Indiana, and Missouri). Model outputs were evaluated statistically by comparison with meteorological observations (DS337.0, METAR data, and the Water and Atmospheric Resources Monitoring Network), and the resulting statistics were compared to benchmark values from the literature. The identified optimum configurations of physics parameterizations were then evaluated for the whole months of May and October 2011 to assess WRF model performance for Midwestern spring and fall seasons. This study demonstrated that, for the chosen physics options, WRF predicted temperature (Index of Agreement (IOA) = 0.99), pressure (IOA = 0.99), relative humidity (IOA = 0.93), wind speed (IOA = 0.85), and wind direction (IOA = 0.97) well. However, WRF did not predict daily precipitation satisfactorily (IOA = 0.16). The developed gridded weather fields will be used as inputs to a CTM ensemble consisting of the Comprehensive Air Quality Model with Extensions to study the impacts of chemical fertilizer usage on regional air quality in the Midwest USA.
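The Index of Agreement (IOA) reported above is straightforward to compute; the sketch below uses Willmott's common definition, which we assume matches the benchmark formula used in the study, and the observation/prediction values are invented.

```python
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's index of agreement d (0 = no skill, 1 = perfect match)."""
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

obs  = np.array([12.1, 14.3, 15.0, 13.2, 11.8])  # e.g., observed 2-m temperature
pred = np.array([12.4, 14.0, 15.5, 12.9, 12.1])  # e.g., model-predicted values
print(round(index_of_agreement(pred, obs), 3))
```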
[The effectiveness of comprehensive rehabilitation after a first episode of ischemic stroke].
Starosta, Michał; Niwald, Marta; Miller, Elżbieta
2015-05-01
Ischemic stroke is the most common cause of hospitalization in the Department of Neurological Rehabilitation. Comprehensive rehabilitation is essential for regaining lost functional efficiency. The aim of this study was to evaluate the effectiveness of a disorder-specific rehabilitation program in 57 patients with first-ever ischemic stroke. The study included 57 patients (27 women, 30 men) aged 47 to 89 years. Patients were admitted for comprehensive rehabilitation lasting an average of 25 days. The treatment program consisted of exercises aimed at re-education of posture and gait; in addition, physical treatments were used. The effectiveness of rehabilitation was measured using the Activities of Daily Living (ADL) scale, the Modified Rankin Scale, the Rivermead Motor Assessment (RMA1, global movements; RMA2, lower limb and trunk; RMA3, upper limb) and two psychological tests: the Geriatric Depression Scale (GDS) and the Beck Depression Inventory (BDI). As a result of comprehensive rehabilitation, improvements in functional status and mental health were observed: 32% on the ADL scale (women 36%, men 30%) and 22% on the Rankin scale (women 22%, men 21%). In the RMA, statistically significant improvement (p=0.001) was observed in all subscales. The highest rate of improvement was in upper limb function (RMA3, 41%). In the other subscales, women achieved significantly greater improvement than men (RMA1: 43% versus 25%; RMA2: 41% versus 30%). The psychological assessment showed statistically significant improvement on the GDS (p<0.001; patients younger than 60 years) and on the BDI in men (p=0.038; patients older than 60 years). The Spearman correlation coefficient showed no relation between mental state and functional improvement (GDS versus ADL; BDI versus ADL). The 25-day comprehensive rehabilitation program during the subacute stroke phase mainly improves upper limb function. Women achieved better functional improvement on all parameters. In addition, symptoms of depression were present in the whole study group, and improvement in mental state was observed primarily in patients over 60 years old. © 2015 MEDPRESS.
Identifying customer-focused performance measures : final report 655.
DOT National Transportation Integrated Search
2010-10-01
The Arizona Department of Transportation (ADOT) completed a comprehensive customer satisfaction assessment in July 2009. ADOT commissioned the assessment to acquire statistically valid data from residents and community leaders to help it identify...
Planck 2015 results. XVII. Constraints on primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Arroja, F.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hamann, J.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Heavens, A.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Münchmeyer, M.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Peiris, H. V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Shiraishi, M.; Smith, K.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutter, P.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Troja, A.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
The Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators - separable template-fitting (KSW), binned, and modal - we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone ƒlocalNL = 2.5 ± 5.7, ƒequilNL = -16 ± 70, and ƒorthoNL = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain ƒlocalNL = 0.8 ± 5.0, ƒequilNL = -4 ± 43, and ƒorthoNL = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the "look elsewhere" effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be 𝓰localNL = (-0.9 ± 7.7) × 10⁴ (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
Sources of Error and the Statistical Formulation of M_S:m_b Seismic Event Screening Analysis
NASA Astrophysics Data System (ADS)
Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.
2014-03-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS are analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H_0: explosion characteristics, using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted m_b) computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with m_b greater than 3.5. The Rayleigh-wave magnitude (denoted M_S) is a measure of later-arriving surface-wave energy. Magnitudes are measurements of seismic energy that include adjustments (a physical correction model) for path and distance effects between event and station. Relative to m_b, earthquakes generally have a larger M_S magnitude than explosions. This article proposes a hypothesis test (screening analysis) using M_S and m_b that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the 2009 Democratic People's Republic of Korea announced nuclear weapon test fails to reject the null hypothesis H_0: explosion characteristics.
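The proposed test can be caricatured as a one-sided z-test on the M_S - m_b difference whose standard error is inflated by a model-inadequacy term. The sketch below is schematic only: delta0, sig_meas, and sig_model are illustrative placeholders, not the paper's calibrated values.

```python
from math import sqrt
from scipy.stats import norm

def screen_event(ms, mb, delta0=-0.64, sig_meas=0.2, sig_model=0.2, alpha=0.05):
    """One-sided test of H0: explosion characteristics.
    Earthquakes tend to have larger M_S - m_b than explosions, so a
    sufficiently large difference rejects H0 (the event is 'screened out').
    The standard error includes a term for correction-model inadequacy,
    echoing the formulation the paper argues for."""
    se = sqrt(2 * sig_meas**2 + sig_model**2)
    z = ((ms - mb) - delta0) / se
    p = 1.0 - norm.cdf(z)
    return z, p, p < alpha  # True -> reject H0: explosion

print(screen_event(ms=4.2, mb=4.5))  # small M_S - m_b: consistent with explosion
print(screen_event(ms=4.9, mb=4.3))  # large difference: screened as earthquake-like
```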
Substorm associated radar auroral surges: a statistical study and possible generation model
NASA Astrophysics Data System (ADS)
Shand, B. A.; Lester, M.; Yeoman, T. K.
1998-04-01
Substorm-associated radar auroral surges (SARAS) are a short-lived (15-90 minutes) and spatially localised (~5° of latitude) perturbation of the plasma convection pattern observed within the auroral E-region. The understanding of such phenomena has important ramifications for the investigation of the larger-scale plasma convection and ultimately the coupling of the solar wind, magnetosphere and ionosphere system. A statistical investigation of SARAS, observed by the Sweden And Britain Radar Experiment (SABRE), is undertaken in order to provide a more extensive examination of the local time occurrence and propagation characteristics of the events. The statistical analysis determined a local time occurrence of observations between 1420 MLT and 2200 MLT, with a maximum occurrence centred around 1700 MLT. The propagation velocity of the SARAS feature through the SABRE field of view was found to be predominantly L-shell aligned, centred around 1750 m s-1 and within the range 500 m s-1 to 3500 m s-1. This comprehensive examination of SARAS provides the opportunity to discuss, qualitatively, a possible generation mechanism based on a proposed model for the production of a similar phenomenon referred to as sub-auroral ion drifts (SAIDs). The comparison suggests that SARAS may result from a geophysical mechanism similar to that which produces SAID events, but probably occurring at a different time in the evolution of the event.
TREATMENT SWITCHING: STATISTICAL AND DECISION-MAKING CHALLENGES AND APPROACHES.
Latimer, Nicholas R; Henshall, Chris; Siebert, Uwe; Bell, Helen
2016-01-01
Treatment switching refers to the situation in a randomized controlled trial where patients switch from their randomly assigned treatment onto an alternative. Often, switching is from the control group onto the experimental treatment. In this instance, a standard intention-to-treat analysis does not identify the true comparative effectiveness of the treatments under investigation. We aim to describe statistical methods for adjusting for treatment switching in a comprehensible way for nonstatisticians, and to summarize views on these methods expressed by stakeholders at the 2014 Adelaide International Workshop on Treatment Switching in Clinical Trials. We describe three statistical methods used to adjust for treatment switching: marginal structural models, two-stage adjustment, and rank preserving structural failure time models. We draw upon discussion heard at the Adelaide International Workshop to explore the views of stakeholders on the acceptability of these methods. Stakeholders noted that adjustment methods are based on assumptions, the validity of which may often be questionable. There was disagreement on the acceptability of adjustment methods, but consensus that when these are used, they should be justified rigorously. The utility of adjustment methods depends upon the decision being made and the processes used by the decision-maker. Treatment switching makes estimating the true comparative effect of a new treatment challenging. However, many decision-makers have reservations with adjustment methods. These, and how they affect the utility of adjustment methods, require further exploration. Further technical work is required to develop adjustment methods to meet real world needs, to enhance their acceptability to decision-makers.
NASA Astrophysics Data System (ADS)
Singh, Jitendra; Sekharan, Sheeba; Karmakar, Subhankar; Ghosh, Subimal; Zope, P. E.; Eldho, T. I.
2017-04-01
Mumbai, the commercial and financial capital of India, experiences incessant annual rain episodes, mainly attributable to erratic rainfall patterns during monsoons and the urban heat-island effect of escalating urbanization, leading to increasing vulnerability to frequent flooding. After the infamous 2005 Mumbai torrential rains, when only two rain gauging stations existed, the governing civic body, the Municipal Corporation of Greater Mumbai (MCGM), came forward with an initiative to install 26 automatic weather stations (AWS) in June 2006 (MCGM 2007), later increased to 60 AWS. A comprehensive statistical analysis to understand the spatio-temporal pattern of rainfall over Mumbai or any other coastal city in India had never been attempted earlier. In the current study, a thorough analysis of available rainfall data for 2006-2014 from these stations was performed; the 2013-2014 sub-hourly data from 26 AWS were found useful for further analyses due to their consistency and continuity. The correlogram cloud indicated no pattern of significant correlation when we considered the closest to the farthest gauging station from the base station; this impression was also supported by the semivariogram plots. Gini index values, a statistical measure of temporal non-uniformity, were above 0.8 at a visible majority of stations and showed an increasing trend at most gauges; this led us to conclude that inconsistency in daily rainfall gradually increases as the monsoon progresses. Interestingly, night-time rainfall was lower than daytime rainfall. The pattern-less, high spatio-temporal variation observed in Mumbai rainfall data signifies the futility of independently applying advanced statistical techniques, and thus calls for simultaneous inclusion of physics-centred models such as different meso-scale numerical weather prediction systems, particularly the Weather Research and Forecasting (WRF) model.
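The Gini index used here as a measure of temporal non-uniformity of daily rainfall has a compact closed form; below is a minimal sketch with invented rainfall series, assuming the standard definition of the index.

```python
import numpy as np

def gini(x):
    """Gini index of non-uniformity (0 = perfectly even, -> 1 = concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

even_rain = np.full(30, 5.0)                      # uniform daily rainfall (mm)
bursty = np.zeros(30); bursty[[3, 17]] = 75.0     # two extreme events
print(gini(even_rain), gini(bursty))              # ~0.0 versus ~0.93
```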
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.
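The simulated-sampling comparison of plot sizes can be sketched as follows; the synthetic "truth" raster, gamma parameters, and plot counts are invented, and a real program would sample from mapped stream networks with a spatially balanced design.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical truth: stream length (km) in each 1-km2 cell of a 100x100 region
truth = rng.gamma(shape=0.5, scale=2.0, size=(100, 100))

def simulate_sampling(plot_side, n_plots, n_reps=2000):
    """Estimate regional total from randomly placed square plots of side
    `plot_side` cells; report relative standard error and mean ratio."""
    total = truth.sum()
    lim = 100 - plot_side
    ests = []
    for _ in range(n_reps):
        r = rng.integers(0, lim + 1, n_plots)
        c = rng.integers(0, lim + 1, n_plots)
        dens = [truth[i:i + plot_side, j:j + plot_side].mean()
                for i, j in zip(r, c)]
        ests.append(np.mean(dens) * truth.size)  # expand mean density to region
    ests = np.array(ests)
    return ests.std() / total, ests.mean() / total

for side in (1, 2, 3, 4):          # 1, 4, 9, 16 km2 plots
    n = 64 // side**2              # hold total mapped area roughly fixed
    print(side**2, "km2:", simulate_sampling(side, n))
```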
A fully probabilistic approach to extreme rainfall modeling
NASA Astrophysics Data System (ADS)
Coles, Stuart; Pericchi, Luis Raúl; Sisson, Scott
2003-03-01
It is an embarrassingly frequent experience that statistical practice fails to foresee historical disasters. It is all too easy to blame global trends or some sort of external intervention, but in this article we argue that statistical methods that do not take comprehensive account of the uncertainties involved in both model and predictions are bound to produce an over-optimistic appraisal of future extremes that is often contradicted by observed hydrological events. Based on the annual and daily rainfall data on the central coast of Venezuela, different modeling strategies and inference approaches show that the 1999 rainfall which caused the worst environmentally related tragedy in Venezuelan history was extreme, but not implausible given the historical evidence. We follow in turn a classical likelihood and a Bayesian approach, arguing that the latter is the most natural approach for taking into account all uncertainties. In each case we emphasize the importance of making inference on predicted levels of the process rather than on model parameters. Our most detailed model comprises seasons with unknown starting points and durations for the extremes of daily rainfall, whose behavior is described using a standard threshold model. Based on a Bayesian analysis of this model, so that both prediction uncertainty and process heterogeneity are properly modeled, we find that the 1999 event has a sizeable probability, which implies that such an occurrence within a reasonably short time horizon could have been anticipated. Finally, since accumulation of extreme rainfall over several days is an additional difficulty (indeed, the catastrophe of 1999 was exacerbated by heavy rainfall on successive days), we examine the effect of timescale on our broad conclusions, finding results to be broadly similar across different choices.
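The "standard threshold model" mentioned above is the peaks-over-threshold approach with a generalized Pareto distribution (GPD) for exceedances. Below is a minimal likelihood-based sketch on synthetic rainfall; the Bayesian treatment in the paper would add priors and posterior predictive uncertainty, and the threshold choice here is an assumption.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
daily = rng.gamma(shape=0.4, scale=12.0, size=40 * 365)  # synthetic daily rainfall

u = np.quantile(daily, 0.98)                # threshold choice (assumption)
exc = daily[daily > u] - u                  # threshold exceedances
c, loc, scale = genpareto.fit(exc, floc=0)  # fit GPD to exceedances

def return_level(N, rate=len(exc) / 40.0):
    """N-year return level: the daily value exceeded on average once in N years."""
    m = N * rate                            # expected exceedances in N years
    return u + genpareto.ppf(1 - 1 / m, c, loc=0, scale=scale)

print(return_level(100))                    # 100-year daily rainfall estimate
```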
PinAPL-Py: A comprehensive web-application for the analysis of CRISPR/Cas9 screens.
Spahn, Philipp N; Bath, Tyler; Weiss, Ryan J; Kim, Jihoon; Esko, Jeffrey D; Lewis, Nathan E; Harismendy, Olivier
2017-11-20
Large-scale genetic screens using CRISPR/Cas9 technology have emerged as a major tool for functional genomics. With its increased popularity, experimental biologists frequently acquire large sequencing datasets for which they often do not have an easy analysis option. While a few bioinformatic tools have been developed for this purpose, their utility is still hindered either due to limited functionality or the requirement of bioinformatic expertise. To make sequencing data analysis of CRISPR/Cas9 screens more accessible to a wide range of scientists, we developed a Platform-independent Analysis of Pooled Screens using Python (PinAPL-Py), which is operated as an intuitive web-service. PinAPL-Py implements state-of-the-art tools and statistical models, assembled in a comprehensive workflow covering sequence quality control, automated sgRNA sequence extraction, alignment, sgRNA enrichment/depletion analysis and gene ranking. The workflow is set up to use a variety of popular sgRNA libraries as well as custom libraries that can be easily uploaded. Various analysis options are offered, suitable to analyze a large variety of CRISPR/Cas9 screening experiments. Analysis output includes ranked lists of sgRNAs and genes, and publication-ready plots. PinAPL-Py helps to advance genome-wide screening efforts by combining comprehensive functionality with user-friendly implementation. PinAPL-Py is freely accessible at http://pinapl-py.ucsd.edu with instructions and test datasets.
Ornaghi, Veronica; Pepe, Alessandro; Grazzani, Ilaria
2016-01-01
Emotion comprehension (EC) is known to be a key correlate and predictor of prosociality from early childhood. In the present study, we examined this relationship within the broad theoretical construct of social understanding which includes a number of socio-emotional skills, as well as cognitive and linguistic abilities. Theory of mind, especially false-belief understanding, has been found to be positively correlated with both EC and prosocial orientation. Similarly, language ability is known to play a key role in children's socio-emotional development. The combined contribution of false-belief understanding and language to explaining the relationship between EC and prosociality has yet to be investigated. Thus, in the current study, we conducted an in-depth exploration of how preschoolers' false-belief understanding and language ability each contribute to modeling the relationship between children's comprehension of emotion and their disposition to act prosocially toward others, after controlling for age and gender. Participants were 101 4- to 6-year-old children (54% boys), who were administered measures of language ability, false-belief understanding, EC and prosocial orientation. Multiple mediation analysis of the data suggested that false-belief understanding and language ability jointly and fully mediated the effect of preschoolers' EC on their prosocial orientation. Analysis of covariates revealed that gender exerted no statistically significant effect, while age had a trivial positive effect. Theoretical and practical implications of the findings are discussed.
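Multiple mediation is commonly estimated as products of regression coefficients with bootstrap confidence intervals. The sketch below collapses the study's two mediators into one for brevity; the data and effect sizes are invented, not the study's.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Single-mediator indirect effect a*b:
    a = effect of X on M; b = effect of M on Y controlling for X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n, est = len(x), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        est.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(est, [2.5, 97.5])

# hypothetical data: EC -> language (mediator) -> prosocial orientation
rng = np.random.default_rng(3)
ec = rng.normal(size=101)
lang = 0.6 * ec + rng.normal(size=101)
prosocial = 0.5 * lang + rng.normal(size=101)
print(indirect_effect(ec, lang, prosocial), bootstrap_ci(ec, lang, prosocial))
```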
Makino, Elizabeth T; Kadoya, Kuniko; Sigler, Monya L; Hino, Peter D; Mehta, Rahul C
2016-12-01
Pigmentary changes in people of different ethnic origins are controlled by slight variations in key biological pathways leading to different outcomes from the same treatment. It is important to develop and test products for desired outcomes in varying ethnic populations. To develop a comprehensive product (LYT2) that affects all major biological pathways controlling pigmentation and test for clinical efficacy and safety in different ethnic populations. A thorough analysis of biological pathways was used to identify ingredient combinations for LYT2 that provided optimal melanin reduction in a 3-D skin model. Expression of four key genes for melanogenesis, TYR, TYRP-1, DCT, and MITF was analyzed by qPCR. Clinical study was conducted to compare the efficacy and tolerability of LYT2 against 4% hydroquinone (HQ). Average melanin suppression by LYT2 in 7 independent experiments was 45%. All four key genes show significant down- regulation of expression. LYT2 provided statistically significant reductions in mean overall hyperpigmentation grades as early as week 2 compared to baseline, with continued significant improvements through week 12 in all ethnic groups tested. We have successfully combined management of 6 categories of pathways related to melanogenesis: melanocyte activation, melanosome development, melanin production, melanin distribution, keratinocyte turnover, and barrier function to create a comprehensive HQ-free product. The outcome clearly shows greater pigmentation control with LYT2 compared to other HQ-free products in skin tissue models and earlier control in clinical studies compared to 4% HQ. Clinical study shows pigmentation control benefits of LYT2 in people of Caucasian, Hispanic, and African ethnic origins. J Drugs Dermatol. 2016;15(12):1562-1570.
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.
Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E
2015-09-03
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
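The combined weighting strategy can be imitated with ordinary weighted least squares once sample variance factors and observation-level weights are in hand. The sketch below is a simplified numpy analogue with toy data; the actual method is the R limma/voom pipeline, and the variance factors here are given rather than estimated from a log-linear variance model.

```python
import numpy as np

def weighted_fit(y, design, sample_var_factors, obs_weights):
    """Gene-wise weighted least squares where the final weight is the
    product of an observation-level (voom-style) weight and the inverse
    of a sample variance factor."""
    w = obs_weights / sample_var_factors  # combine the two weight layers
    sw = np.sqrt(w)
    Xw = design * sw[:, None]             # weighted design matrix
    yw = y * sw                           # weighted response
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

# toy example: 8 samples, two groups; sample 7 is low quality (variance x4)
design = np.column_stack([np.ones(8), np.repeat([0, 1], 4)])
var_factors = np.array([1, 1, 1, 1, 1, 1, 4.0, 1])
obs_w = np.ones(8)                        # flat mean-variance trend here
rng = np.random.default_rng(7)
y = design @ np.array([5.0, 1.0]) + rng.normal(scale=np.sqrt(var_factors))
print(weighted_fit(y, design, var_factors, obs_w))  # estimates near [5, 1]
```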
Banerjee, Imon; Malladi, Sadhika; Lee, Daniela; Depeursinge, Adrien; Telli, Melinda; Lipson, Jafi; Golden, Daniel; Rubin, Daniel L
2018-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is sensitive but not specific to determining treatment response in early stage triple-negative breast cancer (TNBC) patients. We propose an efficient computerized technique for assessing treatment response, specifically the residual tumor (RT) status and pathological complete response (pCR), in response to neoadjuvant chemotherapy. The proposed approach is based on Riesz wavelet analysis of pharmacokinetic maps derived from noninvasive DCE-MRI scans obtained before and after treatment. We compared the performance of Riesz features with traditional gray-level co-occurrence matrices and with a comprehensive characterization of the lesion that includes a wide range of quantitative features (e.g., shape and boundary). We investigated a set of predictive models ([Formula: see text]) incorporating distinct combinations of quantitative characterizations and statistical models at different time points of the treatment, and several of the resulting area under the receiver operating characteristic curve (AUC) values are above 0.8. The most efficient models are based on first-order statistics and Riesz wavelets, which predicted RT with an AUC value of 0.85 and pCR with an AUC value of 0.83, improving results reported in a previous study by [Formula: see text]. Our findings suggest that Riesz texture analysis of TNBC lesions can be considered a potential framework for optimizing TNBC patient care.
3Drefine: an interactive web server for efficient protein structure refinement.
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-07-08
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
The Implementation of C-ID, R2D2 Model on Learning Reading Comprehension
ERIC Educational Resources Information Center
Rayanto, Yudi Hari; Rusmawan, Putu Ngurah
2016-01-01
The purposes of this research are to find out, (1) whether C-ID, R2D2 model is effective to be implemented on learning Reading comprehension, (2) college students' activity during the implementation of C-ID, R2D2 model on learning Reading comprehension, and 3) college students' learning achievement during the implementation of C-ID, R2D2 model on…
Swetha, Jonnalagadda Laxmi; Arpita, Ramisetti; Srikanth, Chintalapani; Nutalapati, Rajasekhar
2014-01-01
Background: Biostatistics is an integral part of research protocols. In any field of inquiry or investigation, data obtained are subsequently classified, analyzed and tested for accuracy by statistical methods. Statistical analysis of collected data thus forms the basis for all evidence-based conclusions. Aim: The aim of this study was to evaluate the cognition, comprehension and application of biostatistics in research among postgraduate students in Periodontics in India. Materials and Methods: A total of 391 postgraduate students registered for a master's course in periodontics at various dental colleges across India were included in the survey. Data regarding the level of knowledge, understanding and its application in the design and conduct of research protocols were collected using a dichotomous questionnaire. Descriptive statistics were used for data analysis. Results: Overall, 79.2% of students were aware of the importance of biostatistics in research, 55-65% were familiar with MS Excel spreadsheets for graphical representation of data and with the statistical software available on the internet, 26.0% had biostatistics as a mandatory subject in their curriculum, 9.5% had tried to perform statistical analysis on their own, while 3.0% were successful in performing statistical analysis of their studies on their own. Conclusion: Biostatistics should play a central role in the planning, conduct, interim analysis, final analysis and reporting of periodontal research, especially by postgraduate students. Indian postgraduate students in periodontics are aware of the importance of biostatistics in research, but their level of understanding and application is still basic and needs to be addressed. PMID:24744547
Low back pain in 17 countries, a Rasch analysis of the ICF core set for low back pain.
Røe, Cecilie; Bautz-Holter, Erik; Cieza, Alarcos
2013-03-01
Previous studies indicate that a worldwide measurement tool may be developed based on the International Classification of Functioning, Disability and Health (ICF) Core Sets for chronic conditions. The aim of the present study was to explore the possibility of constructing a cross-cultural measurement of functioning for patients with low back pain (LBP) on the basis of the Comprehensive ICF Core Set for LBP and to evaluate the properties of the ICF Core Set. The Comprehensive ICF Core Set for LBP was scored by health professionals for 972 patients with LBP from 17 countries. Qualifier levels of the categories, invariance across age, sex and countries, construct validity, and the ordering of the categories in the components of body function, body structure, and activities and participation were explored by Rasch analysis. The item-trait χ² statistics showed that the 53 categories in the ICF Core Set for LBP did not fit the Rasch model (P<0.001). The main challenge was the invariance in the responses according to country. Analysis of the four countries with the largest sample sizes indicated that the data from Germany fit the Rasch model, and that the data from Norway, Serbia and Kuwait also fit the model in the components of body functions and of activities and participation. The components of body functions and of activities and participation had negative mean locations, -2.19 (SD 1.19) and -2.98 (SD 1.07), respectively. The negative location indicates that the ICF Core Set reflects patients with a lower level of function than the present patient sample. The present results indicate that it may be possible to construct a clinical measure of function on the basis of the Comprehensive ICF Core Set for LBP by calculating country-specific scores before pooling the data.
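A dichotomous Rasch model can be fit by joint maximum likelihood in a few lines. The sketch below is only illustrative: the ICF qualifiers are polytomous, so a real analysis like the one above would use a partial-credit style model and proper item-trait fit statistics.

```python
import numpy as np
from scipy.optimize import minimize

def rasch_jml(responses):
    """Joint maximum-likelihood fit of a dichotomous Rasch model:
    P(endorse) = sigmoid(theta_person - b_item)."""
    n_p, n_i = responses.shape

    def nll(params):
        theta, b = params[:n_p], params[n_p:]
        logit = theta[:, None] - b[None, :]
        p = 1.0 / (1.0 + np.exp(-logit))
        eps = 1e-9
        return -np.sum(responses * np.log(p + eps)
                       + (1 - responses) * np.log(1 - p + eps))

    res = minimize(nll, np.zeros(n_p + n_i), method="L-BFGS-B")
    return res.x[:n_p], res.x[n_p:]

# synthetic responses: 50 persons x 10 items with known difficulties
rng = np.random.default_rng(5)
true_theta, true_b = rng.normal(size=50), np.linspace(-2, 2, 10)
P = 1 / (1 + np.exp(-(true_theta[:, None] - true_b[None, :])))
data = (rng.random((50, 10)) < P).astype(float)
theta_hat, b_hat = rasch_jml(data)
print(np.corrcoef(true_b, b_hat)[0, 1])  # item difficulties recovered
```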
NASA Astrophysics Data System (ADS)
Lockwood, Timothy A.
Federal legislative changes in 2006 no longer entitle cogeneration project financings by law to receive the benefit of a power purchase agreement underwritten by an investment-grade investor-owned utility. Consequently, this research explored the need for a new market-risk model for future cogeneration and combined heat and power (CHP) project financing. CHP project investment represents a potentially enormous energy efficiency benefit: its application reduces fossil fuel use by up to 55% compared to traditional energy generation, and concurrently eliminates up to 50% of constituent air emissions, including global warming gases. As a supplemental approach to a comprehensive technical analysis, quantitative multivariate modeling was also used to test the statistical validity and reliability of host facility energy demand and CHP supply ratios in predicting the economic performance of CHP project financing. The resulting analytical models, although not statistically reliable at this time, suggest a radically simplified CHP design method for future profitable CHP investments using four easily attainable energy ratios. This design method shows that financially successful CHP adoption occurs when the average system heat-to-power-ratio supply is less than or equal to the average host-convertible-energy-ratio, and when the average nominally-rated capacity is less than average host facility-load-factor demands. New CHP investments can play a role in solving the world-wide problem of accommodating growing energy demand while preserving our precious and irreplaceable air quality for future generations.
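The two screening criteria stated above translate directly into a trivial decision function; the variable names and example numbers below are illustrative, not values from the study.

```python
def chp_screen(heat_to_power, host_convertible_energy_ratio,
               rated_capacity_kw, host_load_factor_demand_kw):
    """Screening rule paraphrased from the design method above: a CHP
    project looks financially viable when the system's average
    heat-to-power supply ratio does not exceed the host's convertible
    energy ratio, and its nominal rating stays below the host's
    load-factor demand."""
    ratio_ok = heat_to_power <= host_convertible_energy_ratio
    size_ok = rated_capacity_kw < host_load_factor_demand_kw
    return ratio_ok and size_ok

print(chp_screen(1.2, 1.5, 800, 1000))  # True: passes both criteria
print(chp_screen(2.0, 1.5, 800, 1000))  # False: too much heat per kWh
```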
Tighe, Elizabeth L; Schatschneider, Christopher
2016-07-01
The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82%-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. © Hammill Institute on Disabilities 2014.
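Multiple quantile regression of the kind described is available off the shelf; the minimal sketch below uses statsmodels with invented data (the variable names and effect sizes are hypothetical) to fit the same predictors at several quantiles of the outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 400
df = pd.DataFrame({
    "morph": rng.normal(size=n),  # morphological awareness
    "vocab": rng.normal(size=n),  # vocabulary knowledge
})
df["reading"] = 0.6 * df.morph + 0.4 * df.vocab + rng.normal(scale=0.5, size=n)

# fit the same linear predictor at several points of the reading distribution
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    fit = smf.quantreg("reading ~ morph + vocab", df).fit(q=q)
    print(q, fit.params["morph"].round(2), fit.params["vocab"].round(2))
```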
1983-09-20
comprehensively studied. Among those aspects in need of further observational description are (1) morphology (a description of the various WLF forms in terms of...Sacramento Peak, we compiled the list ourselves. To the best of our knowledge the final list of events is comprehensive, although we have recently...and Conway, M. (1950) The solar flare of 1949 November 19, Observatory 70, 77. 33. Porret, M. (1952) Communications Ecrites: Soleil, l'Astronomie
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
2017-11-09
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
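The workflow (ridge-penalized fitting, multistart exploration, hierarchical clustering of converged solutions) can be mimicked schematically. The surrogate model below is a closed-form toy, whereas the framework above wraps a stiff DAE microkinetic solver, so treat this only as a structural sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.cluster.hierarchy import linkage, fcluster

def fit_kinetic_params(model, data_x, data_y, n_params, n_starts=20,
                       ridge=1e-2, seed=0):
    """Multistart ridge-penalized least squares, then hierarchical
    clustering of the converged parameter vectors."""
    rng = np.random.default_rng(seed)

    def objective(p):
        resid = model(data_x, p) - data_y
        return np.sum(resid**2) + ridge * np.sum(p**2)  # L2 penalty

    sols = []
    for _ in range(n_starts):
        p0 = rng.normal(scale=2.0, size=n_params)
        sols.append(minimize(objective, p0, method="BFGS").x)
    sols = np.array(sols)
    labels = fcluster(linkage(sols, "ward"), t=0.5, criterion="distance")
    return sols, labels  # potentially competing solution families

# toy surrogate "microkinetic" model with two rate-like parameters
model = lambda x, p: p[0] * x / (1.0 + p[1] * x)
x = np.linspace(0.1, 5, 30)
y = model(x, [2.0, 0.8]) + np.random.default_rng(1).normal(scale=0.02, size=30)
sols, labels = fit_kinetic_params(model, x, y, n_params=2)
print(np.unique(labels).size, "solution cluster(s)")
```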
Point-based and model-based geolocation analysis of airborne laser scanning data
NASA Astrophysics Data System (ADS)
Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet
2017-01-01
Airborne laser scanning (ALS) is one of the most effective remote sensing technologies, providing precise three-dimensional (3-D) dense point clouds. A large ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by comprehensive point-based and model-based statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation (NMAD) were used as accuracy indicators, combined with their dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts, determined and partially improved by merging the overlapping strips and by comparison of the ALS and TLS data, were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.
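The normalized median absolute deviation (NMAD) used above as a robust accuracy indicator has a one-line definition; the sketch below assumes the usual 1.4826 scaling constant and invented height differences.

```python
import numpy as np

def nmad(dh):
    """Normalized median absolute deviation of height differences:
    a robust counterpart to the standard deviation (equal to it for
    normally distributed errors)."""
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

dh = np.array([0.05, -0.10, 0.02, 0.08, -0.04, 2.50])  # one gross outlier (m)
print(np.std(dh).round(2), nmad(dh).round(2))          # SD inflated, NMAD robust
```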
Comprehension of Idioms in Turkish Aphasic Participants.
Aydin, Burcu; Barin, Muzaffer; Yagiz, Oktay
2017-12-01
Brain-damaged participants offer an opportunity to evaluate cognitive and linguistic processes and to make assumptions about how the brain works. Cognitive linguists have been investigating the underlying mechanisms of idiom comprehension to unravel the ongoing debate on hemispheric specialization in figurative language comprehension. The aim of this study was to evaluate and compare the comprehension of idiomatic expressions in left-brain-damaged (LBD) aphasic, right-brain-damaged (RBD) and healthy control participants. Idiom comprehension in eleven LBD aphasic participants, ten RBD participants and eleven healthy control participants was assessed with three tasks: a String-to-Picture Matching Task, a Literal Sentence Comprehension Task and an Oral Idiom Definition Task. The results showed that, in the overall idiom comprehension category, the LBD aphasic participants interpreted idioms more literally than the RBD participants. Moreover, there was a significant difference in opaque idiom comprehension, with LBD aphasic participants performing worse than RBD participants. On the other hand, there was no statistically significant difference in transparent idiom comprehension scores between the LBD aphasic and RBD participants. This result also supports the idea that while the figurative processing system is damaged in LBD aphasics, the literal comprehension mechanism is spared to some extent. The results of this study support the view that idiom comprehension sites are mainly left-lateralized. Furthermore, the results are consistent with Giora's Graded Salience Hypothesis.
Improved estimates of fixed reproducible tangible wealth, 1929-95
DOT National Transportation Integrated Search
1997-05-01
This article presents revised estimates of the value of fixed reproducible tangible wealth in the United States for 1929-95; these estimates incorporate the definitional and statistical improvements introduced in last year's comprehensive revis...
Evaluation of procedures for quality assurance specifications
DOT National Transportation Integrated Search
2004-10-01
The objective of this project was to develop a comprehensive quality assurance (QA) manual, supported by scientific evidence and statistical theory, which provides step-by-step procedures and instructions for developing effective and efficient QA spe...
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Neuman, S. P.
2016-12-01
Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensibly by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
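A GSG field is easy to simulate directly from its definition Y(x) = U(x) G(x); the sketch below uses a lognormal subordinator and an exponential covariance for G, both of which are illustrative choices rather than the authors' exact specification.

```python
import numpy as np

def gsg_field(n, corr_len=10.0, seed=0):
    """One-dimensional generalized sub-Gaussian sketch: Y(x) = U(x) * G(x),
    with G a correlated Gaussian field and U a non-negative (here lognormal)
    subordinator independent of G."""
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential covariance
    G = rng.multivariate_normal(np.zeros(n), cov)
    U = rng.lognormal(mean=0.0, sigma=0.5, size=n)             # subordinator
    return U * G

Y = gsg_field(500)
dY = np.diff(Y)
# increments show the peaked, heavy-tailed shape discussed above
print(np.mean(dY), np.mean(dY**4) / np.mean(dY**2) ** 2)       # kurtosis typically > 3
```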
Bednarz, Haley M; Maximo, Jose O; Murdaugh, Donna L; O'Kelley, Sarah; Kana, Rajesh K
2017-06-01
Despite intact decoding ability, deficits in reading comprehension are relatively common in children with autism spectrum disorders (ASD). However, few neuroimaging studies have tested the neural bases of this specific profile of reading deficit in ASD. This fMRI study examined activation and synchronization of the brain's reading network in children with ASD with specific reading comprehension deficits during a word similarities task. Thirteen typically developing children and 18 children with ASD performed the task in the MRI scanner. No statistically significant group differences in functional activation were observed; however, children with ASD showed decreased functional connectivity between the left inferior frontal gyrus (LIFG) and the left inferior occipital gyrus (LIOG). In addition, reading comprehension ability significantly positively predicted functional connectivity between the LIFG and left thalamus (LTHAL) among all subjects. The results of this study provide evidence for altered recruitment of reading-related neural resources in ASD children and suggest specific weaknesses in top-down modulation of semantic processing. Copyright © 2017 Elsevier Inc. All rights reserved.
Mira, William A; Schwanenflugel, Paula J
2013-04-01
The purpose of this study was to determine the effect of oral reading expressiveness on the comprehension of storybooks by 4- and 5-year-old prekindergarten children. The possible impact of prosody on listening comprehension was explored. Ninety-two prekindergarten children (M age = 57.26 months, SD = 3.89 months) listened to an expressive or inexpressive recording of 1 of 2 similar stories. Story comprehension was tested using assessments of both free recall and cued recall. Children showed statistically significantly better cued recall for the expressive readings of stories than for the inexpressive readings. This effect generalized across stories and held when story length was controlled across the expressive and inexpressive versions. The effect of expressiveness on children's free recall was not significant. Highly expressive readings resulted in better comprehension of storybooks by prekindergarten children. Further, because recordings were used, this effect might be attributed to the facilitation of language processing rather than to enhanced social interaction between the reader and the child.
Restorative dentistry productivity of senior students engaged in comprehensive care.
Blalock, John S; Callan, Richard S; Lazarchik, David A; Frank Caughman, W; Looney, Stephen
2012-12-01
In dental education, various clinical delivery models are used to educate dental students, and the quantitative and qualitative measures used to assess the outcomes of these models vary. Georgia Health Sciences University College of Dental Medicine has adopted a version of a general dentistry comprehensive care dental education hybrid model, and outcome assessments were developed to evaluate the effectiveness of this delivery model. The aim of this study was to compare the number of restorative procedures performed by senior dental students under a discipline-based model with the productivity of senior students engaged in comprehensive care as part of a hybrid model. The rate of senior students' productivity in performing various restorative procedures was tracked over four years and compared: in the first two years, the seniors operated in a discipline-based model, while in the last two years they operated in a comprehensive care hybrid model. The results showed a significant increase in student productivity in terms of direct and indirect restorations. This increase suggests that the comprehensive care model is more productive, thereby enhancing clinical experiences for students, improving operating efficiency for schools, and ultimately increasing clinical income.
NASA Astrophysics Data System (ADS)
Kez, V.; Liu, F.; Consalvi, J. L.; Ströhle, J.; Epple, B.
2016-03-01
Oxy-fuel combustion is a promising CO2 capture technology for combustion systems. The process is characterized by much higher CO2 concentrations in the combustion system than conventional air-fuel combustion. To accurately predict the enhanced thermal radiation in oxy-fuel combustion, it is essential to take into account the non-gray nature of gas radiation. In this study, radiation heat transfer in a 3D model gas turbine combustor under two test cases at 20 atm total pressure was calculated by various non-gray gas radiation models, including the statistical narrow-band (SNB) model, the statistical narrow-band correlated-k (SNBCK) model, the wide-band correlated-k (WBCK) model, the full spectrum correlated-k (FSCK) model, and several weighted sum of gray gases (WSGG) models. Calculations with SNB, SNBCK, and FSCK were conducted using the updated EM2C SNB model parameters. Results of the SNB model are taken as the benchmark solution to evaluate the accuracy of the other models considered. Results of SNBCK and FSCK are in good agreement with the benchmark solution. The WBCK model is less accurate than SNBCK or FSCK. Of the three formulations of the WBCK model, the multiple-gases formulation is the best choice with regard to accuracy and computational cost. The WSGG model with the parameters of Bordbar et al. (2014) [20] is the most accurate of the three investigated WSGG models. Use of the gray WSGG formulation leads to significant deviations from the benchmark data and should not be applied to predict radiation heat transfer in oxy-fuel combustion systems. A best practice is suggested for incorporating state-of-the-art gas radiation models into CFD simulations of oxy-fuel combustion systems, achieving high accuracy in radiation heat transfer calculations at minimal additional computational cost for pressure path lengths up to about 10 bar m.
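The WSGG approach replaces the spectrally resolved absorption coefficient with a handful of gray gases plus one transparent gas, so the total emissivity is a weighted sum of gray-gas contributions. A minimal sketch of that evaluation is below; the weights and absorption coefficients are illustrative placeholders, not the fitted parameters of Bordbar et al. or any other published set.

```python
import numpy as np

# Illustrative WSGG parameters (placeholders, not a published fit):
# kappa_i: pressure absorption coefficients [1/(bar m)] of the gray gases
kappa = np.array([0.3, 4.0, 60.0])
# b_ij: coefficients of the temperature-dependent weights a_i(T)
b = np.array([[0.4, 0.1], [0.3, -0.05], [0.2, -0.02]])
T_ref = 1200.0  # reference temperature [K]

def wsgg_emissivity(T, pL):
    """Total emissivity eps(T, pL) = sum_i a_i(T) * (1 - exp(-kappa_i * pL)).

    T  : gas temperature [K]
    pL : pressure path length [bar m]
    The transparent gas (kappa = 0) carries the remaining weight
    1 - sum_i a_i(T), so it contributes nothing to the emissivity.
    """
    Tr = T / T_ref
    a = b[:, 0] + b[:, 1] * Tr       # linear-in-T weights (illustrative)
    return float(np.sum(a * (1.0 - np.exp(-kappa * pL))))

print(wsgg_emissivity(T=1500.0, pL=1.0))
```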
Towards A Complete Model Of Photopic Visual Threshold Performance
NASA Astrophysics Data System (ADS)
Overington, I.
1982-02-01
Based on a wide variety of fragmentary evidence taken from psychophysics, neurophysiology and electron microscopy, it has been possible to put together a very widely applicable conceptual model of photopic visual threshold performance. Such a model is so complex that a single comprehensive mathematical version is excessively cumbersome. It is, however, possible to set up a suite of related mathematical models, each of limited application but with a strictly known envelope of usage. Such models may be used to assess a variety of facets of visual performance when using display imagery, including the effects and interactions of image quality, random and discrete display noise, viewing distance, image motion, etc., both for foveal interrogation tasks and for visual search tasks. The specific model may be selected from the suite according to the assessment task in hand. The paper discusses in some depth the major facets of preperceptual visual processing and their interaction with instrumental image quality and noise. It then highlights the statistical nature of visual performance before going on to consider a number of specific mathematical models of partial visual function. Where appropriate, these are compared with widely popular empirical models of visual function.
ERIC Educational Resources Information Center
Tighe, Elizabeth L.; Wagner, Richard K.; Schatschneider, Christopher
2015-01-01
This study demonstrates the utility of applying a causal indicator modeling framework to investigate important predictors of reading comprehension in third, seventh, and tenth grade students. The results indicated that a 4-factor multiple indicator multiple cause (MIMIC) model of reading comprehension provided adequate fit at each grade…
Individualized Prediction of Reading Comprehension Ability Using Gray Matter Volume.
Cui, Zaixu; Su, Mengmeng; Li, Liangjie; Shu, Hua; Gong, Gaolang
2018-05-01
Reading comprehension is a crucial reading skill for learning and putatively contains 2 key components: reading decoding and linguistic comprehension. Current understanding of the neural mechanism underlying these reading comprehension components is lacking, and whether and how neuroanatomical features can be used to predict these 2 skills remain largely unexplored. In the present study, we analyzed a large sample from the Human Connectome Project (HCP) dataset and successfully built multivariate predictive models for these 2 skills using whole-brain gray matter volume features. The results showed that these models effectively captured individual differences in these 2 skills and were able to significantly predict these components of reading comprehension for unseen individuals. The strict cross-validation using the HCP cohort and another independent cohort of children demonstrated the model generalizability. The identified gray matter regions contributing to the skill prediction consisted of a wide range of regions covering the putative reading, cerebellum, and subcortical systems. Interestingly, there were gender differences in the predictive models, with the female-specific model overestimating the males' abilities. Moreover, the identified contributing gray matter regions for the female-specific and male-specific models exhibited considerable differences, supporting a gender-dependent neuroanatomical substrate for reading comprehension.
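A minimal sketch of the kind of multivariate prediction pipeline described here, under assumed data: a hypothetical matrix of regional gray matter volumes predicts a reading score, with ridge regression and 10-fold cross-validation standing in for whatever estimator and validation scheme the authors actually used.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical data: 500 subjects x 200 gray matter regions, one reading score each
n_subj, n_regions = 500, 200
gm_volumes = rng.standard_normal((n_subj, n_regions))
reading_score = gm_volumes[:, :5].sum(axis=1) + rng.standard_normal(n_subj)

# Out-of-sample predictions via 10-fold cross-validation
model = Ridge(alpha=10.0)
pred = cross_val_predict(model, gm_volumes, reading_score, cv=10)

# Model quality: correlation between predicted and observed scores on unseen folds
r = np.corrcoef(pred, reading_score)[0, 1]
print(f"cross-validated r = {r:.2f}")
```

Evaluating only on held-out folds, as here, is what lets a study claim prediction "for unseen individuals" rather than in-sample fit.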
ERIC Educational Resources Information Center
Lane, Kathleen Lynne; Oakes, Wendy Peia; Jenkins, Abbie; Menzies, Holly Mariah; Kalberg, Jemma Robertson
2014-01-01
Comprehensive, integrated, three-tiered models are context specific and developed by school-site teams according to the core values held by the school community. In this article, the authors provide a step-by-step, team-based process for designing comprehensive, integrated, three-tiered models of prevention that integrate academic, behavioral, and…
Zhang, Yiming; Jin, Quan; Wang, Shuting; Ren, Ren
2011-05-01
The mobile behavior of 1481 peptides in ion mobility spectrometry (IMS), which are generated by protease digestion of the Drosophila melanogaster proteome, is modeled and predicted based on two different types of characterization methods, i.e. a sequence-based approach and a structure-based approach. In this procedure, the sequence-based approach considers both the amino acid composition of a peptide and the local environment profile of each amino acid in the peptide; the structure-based approach is performed with the CODESSA protocol, which regards a peptide as a common organic compound and generates more than 200 statistically significant variables to characterize the whole structure profile of a peptide molecule. Subsequently, nonlinear support vector machine (SVM) and Gaussian process (GP) methods as well as linear partial least squares (PLS) regression are employed to correlate the structural parameters of the characterizations with the IMS drift times of these peptides. The obtained quantitative structure-spectrum relationship (QSSR) models are evaluated rigorously and investigated systematically via both one-deep and two-deep cross-validations as well as rigorous Monte Carlo cross-validation (MCCV). We also give a comprehensive comparison of the statistics arising from the different combinations of variable types with modeling methods and find that the sequence-based approach gives QSSR models with better fitting ability and predictive power, but worse interpretability, than the structure-based approach. In addition, because the sequence-based approach does not require preparing energy-minimized peptide structures before modeling, it is considerably more efficient than the structure-based approach. Copyright © 2011 Elsevier Ltd. All rights reserved.
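Monte Carlo cross-validation repeatedly splits the data at random into training and test sets and averages the test performance. A sketch under assumed data is below: support vector regression stands in for the paper's SVM/GP/PLS trio, and the peptide descriptors and drift times are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300 peptides x 50 descriptors, with IMS drift times
X = rng.standard_normal((300, 50))
drift_time = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.standard_normal(300) * 0.3

# Monte Carlo cross-validation: 100 random 80/20 train/test splits
mccv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
scores = cross_val_score(SVR(C=10.0), X, drift_time, cv=mccv, scoring="r2")
print(f"MCCV mean R^2 = {scores.mean():.2f} +/- {scores.std():.2f}")
```

Unlike k-fold schemes, MCCV reuses observations across splits, so the spread of the scores gives a direct picture of how sensitive the model is to the particular train/test partition.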
Statistical properties of the radiation from SASE FEL operating in the linear regime
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-02-01
The paper presents a comprehensive analysis of the statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free electron laser operating in the linear regime. The investigation has been performed in a one-dimensional approximation, assuming the electron pulse length to be much larger than the coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied: field correlations, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the photoelectric counting statistics of SASE FEL radiation. It is shown that the radiation from a SASE FEL operating in the linear regime possesses all the features of completely chaotic polarized radiation.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
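For reference, one commonly cited large-sample approach (treated here as an assumption, since the paper argues its limits) applies the Fisher-type transformation Z(rho) = (1/2) ln[(1 + (k-1) rho) / (1 - rho)] for the one-way ICC with n groups of size k, with approximate variance k / (2 (k-1) (n-2)). A power sketch built on that approximation:

```python
import numpy as np
from scipy.stats import norm

def fisher_z_icc(rho, k):
    """Fisher-type transformation of the one-way ICC with group size k."""
    return 0.5 * np.log((1 + (k - 1) * rho) / (1 - rho))

def approx_power(rho0, rho1, n_groups, k, alpha=0.05):
    """Approximate power of a one-sided test of H0: ICC = rho0 vs rho1 > rho0.

    Uses the asymptotic variance k / (2*(k-1)*(n-2)) -- an assumption here,
    and exactly the kind of approximation the paper shows can be inadequate.
    """
    se = np.sqrt(k / (2 * (k - 1) * (n_groups - 2)))
    delta = (fisher_z_icc(rho1, k) - fisher_z_icc(rho0, k)) / se
    return norm.cdf(delta - norm.ppf(1 - alpha))

# Example: detect ICC of 0.5 against a null of 0.3 with 40 groups of 4 raters
print(f"approx power = {approx_power(0.3, 0.5, n_groups=40, k=4):.2f}")
```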
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirazi, M.A.; Davis, L.R.
To obtain improved prediction of heated plume characteristics from a surface jet, an integral analysis computer model was modified, and a comprehensive set of field and laboratory data available from the literature was gathered, analyzed, and correlated to estimate the magnitude of certain coefficients that are normally introduced in these analyses to achieve closure. The parameters so estimated include the coefficients for entrainment, turbulent exchange, drag, and shear. Since considerable scatter appeared in the data, even after appropriate subgrouping to narrow the influence of various flow conditions, only statistical procedures could be applied to find the best fit. This and other analyses of its type have been widely used in industry and government for the prediction of thermal plumes from steam power plants. Although the present model has many shortcomings, a recent independent and exhaustive assessment of such predictions revealed that, in comparison with other analyses of its type, the present analysis predicts field situations more successfully.
Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.
Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L
2013-04-01
The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, the application of conventional statistical methods to data generated from such an instrument may not be adequate by simply ignoring or replacing the extra zeros or missing values in the data with arbitrary small consumption values, for example, 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
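The core idea is a Tobit-type likelihood: observed consumption values contribute a normal density, while values at the censoring floor contribute the probability mass below that point, instead of being dropped or replaced by an arbitrary small number. The sketch below fits a left-censored regression by maximum likelihood on synthetic data; for brevity it omits the random effects of the full mixed model, so it illustrates the censoring mechanics, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic data: consumption depends on price, left-censored at 0
n = 400
price = rng.uniform(0, 5, n)
latent = 3.0 - 1.0 * price + rng.standard_normal(n)
y = np.maximum(latent, 0.0)          # observed response, censored at c = 0
censored = y <= 0.0

def neg_loglik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)        # keep sigma positive
    mu = b0 + b1 * price
    # uncensored: normal density; censored: P(latent <= 0) = Phi((0 - mu)/sigma)
    ll_obs = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
    ll_cen = norm.logcdf((0.0 - mu[censored]) / sigma).sum()
    return -(ll_obs + ll_cen)

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0, 0.0]), method="BFGS")
b0, b1, log_sigma = fit.x
print(f"intercept={b0:.2f}, slope={b1:.2f}, sigma={np.exp(log_sigma):.2f}")
```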
TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.
Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D
2018-05-08
Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.
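A minimal sketch of the spatiotemporal quantification idea, assuming a time series of grayscale spheroid images: Otsu thresholding and connected-component labeling stand in for TASI's actual segmentation pipeline, and the area-over-time readout stands in for its richer morpho-phenotype features.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def spheroid_area(frame):
    """Segment the largest bright object in one frame and return its area."""
    mask = frame > threshold_otsu(frame)
    regions = regionprops(label(mask))
    return max(r.area for r in regions) if regions else 0

# Synthetic movie: a spheroid (bright disk) growing over 5 frames
yy, xx = np.mgrid[0:128, 0:128]
frames = []
for t in range(5):
    disk = ((yy - 64) ** 2 + (xx - 64) ** 2) < (10 + 4 * t) ** 2
    frames.append(disk * 1.0 + np.random.default_rng(t).normal(0, 0.1, (128, 128)))

areas = [spheroid_area(f) for f in frames]
print("area per frame:", areas)  # monotone growth -> input to growth-rate models
```

Fitting a growth law to the extracted area series, and comparing fitted parameters across experimental conditions, is the kind of downstream modeling and statistical comparison the framework describes.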
Moving towards an understanding of disability in older US stroke survivors
Brenner, Allison B.; Burke, James F.; Skolarus, Lesli E.
2017-01-01
Objectives We test a comprehensive model of disability in older stroke survivors, and determine the relative contribution of neighborhood, economic, psychological and medical factors to disability. Methods The sample consisted of 728 stroke survivors from the National Health and Aging Trends Study (NHATS), who were 65 years and older living in community settings or residential care. Confirmatory factor analysis and structural equation modeling were used to test relationships between neighborhood, socioeconomic, psychological and medical factors and disability. Results Economic and medical context were associated with disability directly and indirectly through physical impairment. Neighborhood context was associated with disability, but was only marginally statistically significant (p=0.05). The effect of economic and neighborhood factors was small compared to that of medical factors. Discussion Neighborhood and economic factors account for a portion of the variance in disability among older stroke survivors beyond that of medical factors. PMID:27605555
Interpreter of maladies: redescription mining applied to biomedical data analysis.
Waltman, Peter; Pearlman, Alex; Mishra, Bud
2006-04-01
Comprehensive, systematic and integrated data-centric statistical approaches to disease modeling can provide powerful frameworks for understanding disease etiology. Here, one such computational framework based on redescription mining in both its incarnations, static and dynamic, is discussed. The static framework provides bioinformatic tools applicable to multifaceted datasets, containing genetic, transcriptomic, proteomic, and clinical data for diseased patients and normal subjects. The dynamic redescription framework provides systems biology tools to model complex sets of regulatory, metabolic and signaling pathways in the initiation and progression of a disease. As an example, the case of chronic fatigue syndrome (CFS) is considered, which has so far remained intractable and unpredictable in its etiology and nosology. The redescription mining approaches can be applied to the Centers for Disease Control and Prevention's Wichita (KS, USA) dataset, integrating transcriptomic, epidemiological and clinical data, and can also be used to study how pathways in the hypothalamic-pituitary-adrenal axis affect CFS patients.
Advances in segmentation modeling for health communication and social marketing campaigns.
Albrecht, T L; Bryant, C
1996-01-01
Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.
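Since CHAID has no standard scikit-learn implementation, the sketch below uses a CART decision tree as a stand-in to illustrate tree-based audience segmentation; the demographic variables and campaign-response outcome are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical audience data: gender (0/1), marital status (0/1), age
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),       # gender
    rng.integers(0, 2, n),       # marital status
    rng.integers(18, 70, n),     # age
])
# Hypothetical outcome: responded to the health campaign message
responded = ((X[:, 2] < 35) & (X[:, 1] == 1)).astype(int)

# A shallow tree yields a handful of interpretable audience segments
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50).fit(X, responded)
print(export_text(tree, feature_names=["gender", "married", "age"]))
```

Each leaf of the printed tree is a candidate segment (e.g., "married and under 35") whose members can then receive messages tailored to their needs and concerns.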
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm, used as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
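A rough sketch of the unsupervised-residual idea for Painlevé I, y'' = 6y² + x: a small sigmoid network serves as the trial solution, its second derivative is taken by finite differences, and a generic quasi-Newton optimizer stands in for the paper's PSO-plus-active-set hybrid. The initial conditions y(0) = 0, y'(0) = 0 are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
m = 8  # number of sigmoid neurons

def trial(params, x):
    """Trial solution y(x) = sum_i alpha_i * sigmoid(w_i * x + b_i)."""
    alpha, w, b = params[:m], params[m:2 * m], params[2 * m:]
    return (alpha / (1.0 + np.exp(-(np.outer(x, w) + b)))).sum(axis=1)

def loss(params):
    y = trial(params, x)
    ypp = np.gradient(np.gradient(y, h), h)         # finite-difference y''
    residual = ypp - 6.0 * y**2 - x                 # Painlevé I residual
    yp0 = (y[1] - y[0]) / h
    return np.mean(residual**2) + y[0]**2 + yp0**2  # residual + IC penalties

rng = np.random.default_rng(0)
fit = minimize(loss, rng.standard_normal(3 * m) * 0.1, method="L-BFGS-B")
print(f"final unsupervised loss = {fit.fun:.2e}")
```

Driving this residual loss toward zero is the "unsupervised error" minimization the abstract describes: no reference solution is needed, only the differential equation itself.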
Tighe, Elizabeth L.; Schatschneider, Christopher
2016-01-01
This study extended the findings of Tighe and Schatschneider (2015) by investigating the predictive utility of separate dimensions of morphological awareness as well as vocabulary knowledge for reading comprehension in adult basic education (ABE) students. We compared competing two- and three-factor structural equation models of reading comprehension. A three-factor model of real word morphological awareness, pseudoword morphological awareness, and vocabulary knowledge emerged as the best fit and accounted for 79% of the reading comprehension variance. The results indicated that the constructs contributed jointly to reading comprehension; however, vocabulary knowledge was the only potentially unique predictor (p = 0.052), accounting for an additional 5.6% of the variance. This study demonstrates the feasibility of applying a latent variable modeling approach to examine individual differences in the reading comprehension skills of ABE students. Further, this study replicates the findings of Tighe and Schatschneider (2015) on the importance of differentiating among dimensions of morphological awareness in this population. PMID:26869981
Multilingualism and fMRI: Longitudinal Study of Second Language Acquisition
Andrews, Edna; Frigau, Luca; Voyvodic-Casabo, Clara; Voyvodic, James; Wright, John
2013-01-01
BOLD fMRI is often used for the study of human language. However, there are still very few attempts to conduct longitudinal fMRI studies in the study of language acquisition by measuring auditory comprehension and reading. The following paper is the first in a series concerning a unique longitudinal study devoted to the analysis of bi- and multilingual subjects who are: (1) already proficient in at least two languages; or (2) are acquiring Russian as a second/third language. The focus of the current analysis is to present data from the auditory sections of a set of three scans acquired from April, 2011 through April, 2012 on a five-person subject pool who are learning Russian during the study. All subjects were scanned using the same protocol for auditory comprehension on the same General Electric LX 3T Signa scanner in Duke University Hospital. Using a multivariate analysis of covariance (MANCOVA) for statistical analysis, proficiency measurements are shown to correlate significantly with scan results in the Russian conditions over time. The importance of both the left and right hemispheres in language processing is discussed. Special attention is devoted to the importance of contextualizing imaging data with corresponding behavioral and empirical testing data using a multivariate analysis of variance. This is the only study to date that includes: (1) longitudinal fMRI data with subject-based proficiency and behavioral data acquired in the same time frame; and (2) statistical modeling that demonstrates the importance of covariate language proficiency data for understanding imaging results of language acquisition. PMID:24961428
Introduction to the DISRUPT postprandial database: subjects, studies and methodologies.
Jackson, Kim G; Clarke, Dave T; Murray, Peter; Lovegrove, Julie A; O'Malley, Brendan; Minihane, Anne M; Williams, Christine M
2010-03-01
Dysregulation of lipid and glucose metabolism in the postprandial state are recognised as important risk factors for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading, to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques along with integrated mathematical models and clustering analysis will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.
Intarakamhang, Patrawut; Intarakamhang, Ungsinun
2012-12-24
The Comprehensive Lifestyle Intervention, which integrates psychological and educational intervention, is a program to improve self-efficacy, self-regulation, self-care, body mass index, and quality of life of patients with coronary heart disease during the early stages following hospitalization. The purpose of this study was to investigate the effects of the Comprehensive Cardiac Rehabilitation Program on psychological factors including self-efficacy, self-regulation, and self-care, as well as quality of life (QoL) and body mass index (BMI). This study was quasi-experimental research with a one-group repeated-measures design. Eighty patients with coronary artery disease were recruited from either the Medicine or Surgical Ward at the Phramongkutklao Hospital, where the patients joined the Comprehensive Cardiac Rehabilitation Program, which included exercise practice and face-to-face counseling while admitted to the hospital. Telephone counseling was performed one week after discharge from the hospital, followed by individual or group counseling at the Cardiac Rehabilitation Clinic the following week. The follow-up period was six weeks after hospitalization. Data were collected on two occasions, before discharge from the hospital (pretest) and six weeks after (post-test), using the self-efficacy, self-regulation, and self-care questionnaires, as well as the Short Form (SF)-36 (Thai version). The results indicated that by six weeks, 50%, 58.80%, 46.20%, and 72.50% of patients, respectively, had increased their self-efficacy, self-regulation, self-care, and quality of life scores, while 12.50% of patients had decreased their body mass index in comparison with the pretest. Paired t-tests showed statistically significant increases in self-efficacy, self-regulation, and quality of life scores (p<0.01) and in self-care (p<0.05), along with a statistically significant decrease in body mass index (p<0.01).
Efficient model for low-energy transverse beam dynamics in a nine-cell 1.3 GHz cavity
NASA Astrophysics Data System (ADS)
Hellert, Thorsten; Dohlus, Martin; Decking, Winfried
2017-10-01
FLASH and the European XFEL are SASE-FEL user facilities at which superconducting TESLA cavities are operated in a pulsed mode to accelerate long bunch-trains. Several cavities are powered by one klystron. While the low-level rf system is able to stabilize the vector sum of the accelerating gradient of one rf station sufficiently, the rf parameters of individual cavities vary within the bunch-train. In combination with misalignments, these variations induce intra-bunch-train trajectory variations. An efficient model is developed to describe the effect at low beam energy, using numerically adjusted transfer matrices and discrete coupler kick coefficients, respectively. Comparisons with start-to-end tracking and with dedicated experiments at the FLASH injector are shown. The short computation time of the derived model allows for comprehensive numerical studies of the impact of misalignments and variable rf parameters on the transverse intra-bunch-train beam stability at the injector module. Results from both statistical multibunch performance studies and the deduction of misalignments from multibunch experiments are presented.
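A toy version of the transfer-matrix-plus-kick bookkeeping described here: 2x2 drift and thin-lens quadrupole matrices propagate the transverse phase-space vector (x, x'), and a coupler kick enters as an additive angle. The matrix entries and kick value are generic placeholders, not the paper's numerically adjusted coefficients.

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L [m]."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole with focal length f [m]."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Placeholder coupler kick: additive deflection [rad] applied at the cavity
coupler_kick = np.array([0.0, 1e-6])

# Propagate (x, x') through drift -> cavity (drift + coupler kick) -> quad
state = np.array([1e-4, 0.0])              # initial offset of 0.1 mm
state = drift(0.5) @ state
state = drift(1.0) @ state + coupler_kick  # cavity approximated by drift + kick
state = thin_quad(2.0) @ state
print(f"x = {state[0] * 1e3:.3f} mm, x' = {state[1] * 1e6:.2f} urad")
```

Because each element is a cheap matrix-vector product, sweeping rf parameters and misalignments over many bunches stays fast, which is what enables the statistical multibunch studies mentioned above.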